Dec 05 06:46:05 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 05 06:46:05 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 06:46:05 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
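The relabeling pass ends at this point: restorecon left every path under /var/lib/kubelet with its existing context rather than resetting it, and the pod directories all carry container_file_t plus a per-pod MCS category pair (s0:c7,c13, s0:c682,c947, and so on) that keeps one pod's files separated from another's. Below is a minimal Go sketch for pulling the pod UID and its SELinux context out of entries like the ones above; it is an illustration for reading this journal text, not a tool the kubelet ships, and entries that wrap across journal lines are simply missed.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches one restorecon journal entry of the shape seen above:
//   /var/lib/kubelet/pods/<uid>/<path> not reset as customized by admin to <context>
// capturing the pod UID and the SELinux context (e.g. the MCS pair in
// system_u:object_r:container_file_t:s0:c7,c13).
var entry = regexp.MustCompile(
	`/var/lib/kubelet/pods/([0-9a-f-]+)\S* not reset as customized by admin to (\S+)`)

func main() {
	contexts := map[string]string{} // pod UID -> last context seen for it
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here run long
	for sc.Scan() {
		for _, m := range entry.FindAllStringSubmatch(sc.Text(), -1) {
			contexts[m[1]] = m[2]
		}
	}
	for uid, ctx := range contexts {
		fmt.Printf("pod %s -> %s\n", uid, ctx)
	}
}
```

Piping the journal text through it would print one context per pod directory seen, which makes the per-pod category pairs easy to compare at a glance.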
Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 05 06:46:06 crc kubenswrapper[4780]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.027072 4780 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030836 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030861 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030867 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030871 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030892 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030897 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030902 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030907 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030912 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030916 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030919 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030923 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030927 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030931 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030935 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030939 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030942 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030946 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030951 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
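All of the deprecated-flag warnings above give the same advice: move the setting into the file named by --config. The sketch below shows that mapping with placeholder values; the JSON keys are the KubeletConfiguration field names the warnings point at, rendered as JSON so the example stays stdlib-only (config files are usually written as YAML with the same schema). Per the I1205 line above, --pod-infra-container-image is the exception: the sandbox image now belongs in the CRI runtime's own configuration rather than in this file.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative subset of kubelet.config.k8s.io/v1beta1 KubeletConfiguration.
// The JSON keys are the real config-file field names; every value below is a
// placeholder, not read from this node.
type taint struct {
	Key    string `json:"key"`
	Effect string `json:"effect"`
}

type kubeletConfig struct {
	APIVersion               string            `json:"apiVersion"`
	Kind                     string            `json:"kind"`
	ContainerRuntimeEndpoint string            `json:"containerRuntimeEndpoint"` // was --container-runtime-endpoint
	VolumePluginDir          string            `json:"volumePluginDir"`          // was --volume-plugin-dir
	RegisterWithTaints       []taint           `json:"registerWithTaints"`       // was --register-with-taints
	SystemReserved           map[string]string `json:"systemReserved"`           // was --system-reserved
	EvictionHard             map[string]string `json:"evictionHard"`             // suggested successor to --minimum-container-ttl-duration
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "unix:///var/run/crio/crio.sock", // placeholder socket path
		VolumePluginDir:          "/usr/libexec/kubernetes/kubelet-plugins/volume/exec",
		RegisterWithTaints:       []taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}},
		SystemReserved:           map[string]string{"cpu": "500m", "memory": "1Gi"},
		EvictionHard:             map[string]string{"memory.available": "100Mi"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```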
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030956 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030960 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030964 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030968 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030971 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030976 4780 feature_gate.go:330] unrecognized feature gate: Example Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030980 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030984 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030987 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030992 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.030996 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031000 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031003 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031007 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031016 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031020 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031024 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031027 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031031 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031034 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031039 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031043 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031047 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031051 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031054 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031058 4780 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031061 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031066 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031070 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031074 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031078 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031082 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031086 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031092 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031097 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031101 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031105 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031109 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031113 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031117 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031120 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031124 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031128 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031132 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031135 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031139 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031142 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031145 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031149 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031152 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031155 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.031160 
4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031391 4780 flags.go:64] FLAG: --address="0.0.0.0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031403 4780 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031410 4780 flags.go:64] FLAG: --anonymous-auth="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031416 4780 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031421 4780 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031425 4780 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031430 4780 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031436 4780 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031440 4780 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031444 4780 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031448 4780 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031454 4780 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031458 4780 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031462 4780 flags.go:64] FLAG: --cgroup-root="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031466 4780 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031471 4780 flags.go:64] FLAG: --client-ca-file="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031475 4780 flags.go:64] FLAG: --cloud-config="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031479 4780 flags.go:64] FLAG: --cloud-provider="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031483 4780 flags.go:64] FLAG: --cluster-dns="[]" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031488 4780 flags.go:64] FLAG: --cluster-domain="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031492 4780 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031497 4780 flags.go:64] FLAG: --config-dir="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031501 4780 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031506 4780 flags.go:64] FLAG: --container-log-max-files="5" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031511 4780 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031516 4780 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031520 4780 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031525 4780 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 
06:46:06.031529 4780 flags.go:64] FLAG: --contention-profiling="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031534 4780 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031538 4780 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031543 4780 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031547 4780 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031552 4780 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031556 4780 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031560 4780 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031564 4780 flags.go:64] FLAG: --enable-load-reader="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031568 4780 flags.go:64] FLAG: --enable-server="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031572 4780 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031577 4780 flags.go:64] FLAG: --event-burst="100" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031581 4780 flags.go:64] FLAG: --event-qps="50" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031586 4780 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031590 4780 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031594 4780 flags.go:64] FLAG: --eviction-hard="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031599 4780 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031604 4780 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031610 4780 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031614 4780 flags.go:64] FLAG: --eviction-soft="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031619 4780 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031623 4780 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031627 4780 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031632 4780 flags.go:64] FLAG: --experimental-mounter-path="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031636 4780 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031640 4780 flags.go:64] FLAG: --fail-swap-on="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031644 4780 flags.go:64] FLAG: --feature-gates="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031655 4780 flags.go:64] FLAG: --file-check-frequency="20s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031659 4780 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031663 4780 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031668 
4780 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031672 4780 flags.go:64] FLAG: --healthz-port="10248" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031676 4780 flags.go:64] FLAG: --help="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031679 4780 flags.go:64] FLAG: --hostname-override="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031683 4780 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031687 4780 flags.go:64] FLAG: --http-check-frequency="20s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031692 4780 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031696 4780 flags.go:64] FLAG: --image-credential-provider-config="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031700 4780 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031705 4780 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031709 4780 flags.go:64] FLAG: --image-service-endpoint="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031713 4780 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031717 4780 flags.go:64] FLAG: --kube-api-burst="100" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031722 4780 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031726 4780 flags.go:64] FLAG: --kube-api-qps="50" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031731 4780 flags.go:64] FLAG: --kube-reserved="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031735 4780 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031739 4780 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031743 4780 flags.go:64] FLAG: --kubelet-cgroups="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031747 4780 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031751 4780 flags.go:64] FLAG: --lock-file="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031755 4780 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031760 4780 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031764 4780 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031770 4780 flags.go:64] FLAG: --log-json-split-stream="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031775 4780 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031780 4780 flags.go:64] FLAG: --log-text-split-stream="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031784 4780 flags.go:64] FLAG: --logging-format="text" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031788 4780 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031792 4780 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031797 4780 
flags.go:64] FLAG: --manifest-url="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031801 4780 flags.go:64] FLAG: --manifest-url-header="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031806 4780 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031810 4780 flags.go:64] FLAG: --max-open-files="1000000" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031815 4780 flags.go:64] FLAG: --max-pods="110" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031819 4780 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031824 4780 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031828 4780 flags.go:64] FLAG: --memory-manager-policy="None" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031832 4780 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031837 4780 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031841 4780 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031861 4780 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031871 4780 flags.go:64] FLAG: --node-status-max-images="50" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031890 4780 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031894 4780 flags.go:64] FLAG: --oom-score-adj="-999" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031899 4780 flags.go:64] FLAG: --pod-cidr="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031903 4780 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031910 4780 flags.go:64] FLAG: --pod-manifest-path="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031914 4780 flags.go:64] FLAG: --pod-max-pids="-1" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031919 4780 flags.go:64] FLAG: --pods-per-core="0" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031923 4780 flags.go:64] FLAG: --port="10250" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031927 4780 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031934 4780 flags.go:64] FLAG: --provider-id="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031938 4780 flags.go:64] FLAG: --qos-reserved="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031942 4780 flags.go:64] FLAG: --read-only-port="10255" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031946 4780 flags.go:64] FLAG: --register-node="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031950 4780 flags.go:64] FLAG: --register-schedulable="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031954 4780 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031961 4780 flags.go:64] FLAG: --registry-burst="10" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031965 4780 flags.go:64] 
FLAG: --registry-qps="5" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031969 4780 flags.go:64] FLAG: --reserved-cpus="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031975 4780 flags.go:64] FLAG: --reserved-memory="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031980 4780 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031985 4780 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031989 4780 flags.go:64] FLAG: --rotate-certificates="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031994 4780 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.031998 4780 flags.go:64] FLAG: --runonce="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032003 4780 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032007 4780 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032011 4780 flags.go:64] FLAG: --seccomp-default="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032015 4780 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032019 4780 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032024 4780 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032028 4780 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032033 4780 flags.go:64] FLAG: --storage-driver-password="root" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032037 4780 flags.go:64] FLAG: --storage-driver-secure="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032041 4780 flags.go:64] FLAG: --storage-driver-table="stats" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032045 4780 flags.go:64] FLAG: --storage-driver-user="root" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032049 4780 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032054 4780 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032058 4780 flags.go:64] FLAG: --system-cgroups="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032062 4780 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032069 4780 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032073 4780 flags.go:64] FLAG: --tls-cert-file="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032078 4780 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032083 4780 flags.go:64] FLAG: --tls-min-version="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032087 4780 flags.go:64] FLAG: --tls-private-key-file="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032092 4780 flags.go:64] FLAG: --topology-manager-policy="none" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032096 4780 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032100 4780 flags.go:64] FLAG: 
--topology-manager-scope="container" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032104 4780 flags.go:64] FLAG: --v="2" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032109 4780 flags.go:64] FLAG: --version="false" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032114 4780 flags.go:64] FLAG: --vmodule="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032119 4780 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032124 4780 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032220 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032224 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032237 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032241 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032245 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032249 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032253 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032257 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032260 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032268 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032271 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032275 4780 feature_gate.go:330] unrecognized feature gate: Example Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032279 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032282 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032286 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032291 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032295 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032299 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032304 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
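[Note: everything the kubelet was actually started with is in the flags.go:64 dump above, one FLAG: --name="value" entry per flag. Because the values are consistently double-quoted, the dump machine-parses easily; a sketch, assuming that exact line shape:]

```python
# Sketch: extract the effective command-line flags from the flags.go:64 dump.
# Assumes every entry has the quoted FLAG: --name="value" shape seen above.
import re
import sys

FLAG = re.compile(r'FLAG: (--[\w.-]+)="(.*?)"')
flags = dict(FLAG.findall(sys.stdin.read()))

print(len(flags), "flags parsed")
print("node-ip:", flags.get("--node-ip"))  # 192.168.126.11 in the dump above
print("config: ", flags.get("--config"))   # /etc/kubernetes/kubelet.conf
```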
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032308 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032313 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032318 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032322 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032326 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032330 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032334 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032338 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032342 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032347 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032350 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032354 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032358 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032362 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032365 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032369 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032373 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032376 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032380 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032384 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032389 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032393 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032398 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032402 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032406 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032410 4780 feature_gate.go:330] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032414 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032418 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032422 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032425 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032429 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032433 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032436 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032440 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032445 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032449 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032453 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032456 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032460 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032464 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032467 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032471 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032474 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032478 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032482 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032485 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032489 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032493 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032497 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032500 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.032504 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 06:46:06 crc kubenswrapper[4780]: 
W1205 06:46:06.032509 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.032515 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.038602 4780 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.038650 4780 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038735 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038750 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038755 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038761 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038769 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038774 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038780 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038785 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038789 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038794 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038799 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038803 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038809 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038813 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038817 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038821 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038826 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038830 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 06:46:06 crc 
kubenswrapper[4780]: W1205 06:46:06.038834 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038838 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038843 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038847 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038851 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038856 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038861 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038866 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038871 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038893 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038898 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038903 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038908 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038913 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038917 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038922 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038928 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038933 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038937 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038942 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038946 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038951 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038955 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038959 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038964 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038969 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 06:46:06 crc kubenswrapper[4780]: 
W1205 06:46:06.038973 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038978 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038982 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038987 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.038993 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039001 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039006 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039011 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039017 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039023 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039028 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039035 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039043 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039048 4780 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039053 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039058 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039063 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039068 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039072 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039077 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039082 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039087 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039092 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039096 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039101 4780 feature_gate.go:330] unrecognized feature gate: Example Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039106 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039111 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.039119 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039271 4780 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039279 4780 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039286 4780 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
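[Note: after each round of warnings, the I-level feature_gate.go:386 line gives the effective gate map — the Go fmt rendering of a map[string]bool. A sketch that lifts that single-line, space-separated map literal (an assumption about the exact format) into a Python dict:]

```python
# Sketch: parse the Go-style "feature gates: {map[K:v ...]}" dump from
# feature_gate.go:386 into a Python dict of gate name -> bool.
import re

line = ("feature gates: {map[CloudDualStackNodeIPs:true "
        "DisableKubeletCloudCredentialProviders:true KMSv1:true "
        "NodeSwap:false ValidatingAdmissionPolicy:true]}")

body = re.search(r"map\[(.*)\]", line).group(1)
gates = {k: v == "true" for k, v in (pair.split(":") for pair in body.split())}
print(gates)  # {'CloudDualStackNodeIPs': True, ..., 'NodeSwap': False, ...}
```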
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039294 4780 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039299 4780 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039304 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039308 4780 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039313 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039318 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039322 4780 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039326 4780 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039331 4780 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039335 4780 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039339 4780 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039344 4780 feature_gate.go:330] unrecognized feature gate: Example Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039348 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039352 4780 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039357 4780 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039363 4780 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039368 4780 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039372 4780 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039377 4780 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039381 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039386 4780 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039390 4780 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039395 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039399 4780 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039403 4780 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039407 4780 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039412 4780 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039416 4780 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039420 4780 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039425 4780 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039429 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039435 4780 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039439 4780 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039443 4780 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039447 4780 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039452 4780 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039456 4780 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039460 4780 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039464 4780 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039468 4780 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039473 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039477 4780 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039481 4780 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039485 4780 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039489 4780 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039493 4780 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039498 4780 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039502 4780 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039506 4780 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039511 4780 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039516 4780 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039522 4780 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039528 4780 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039533 4780 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039538 4780 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039542 4780 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039547 4780 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039551 4780 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039556 4780 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039560 4780 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039564 4780 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039568 4780 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039574 4780 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039579 4780 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039584 4780 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039590 4780 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039595 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.039601 4780 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.039608 4780 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.039802 4780 server.go:940] "Client rotation is on, will bootstrap in background" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.042598 4780 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.042695 4780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.043271 4780 server.go:997] "Starting client certificate rotation"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.043302 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.043594 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 02:59:27.333352581 +0000 UTC
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.043679 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.048057 4780 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.049165 4780 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.050593 4780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.058164 4780 log.go:25] "Validated CRI v1 runtime API"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.070293 4780 log.go:25] "Validated CRI v1 image API"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.071563 4780 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.073531 4780 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-05-06-41-27-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.073560 4780 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.086302 4780 manager.go:217] Machine: {Timestamp:2025-12-05 06:46:06.085280542 +0000 UTC m=+0.154796884 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c3fe25e1-e381-4010-89fb-f17e2c9cc29f BootID:d3e37a7b-0024-4e60-b062-627e3948945a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b8:05:db Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b8:05:db Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6b:6b:4e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e8:d5:11 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fd:50:0d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d4:a2:d8 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b9:8d:80 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:e5:eb:29:e8:ee Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:a0:aa:ae:17:c4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.086492 4780 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.086586 4780 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087001 4780 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087151 4780 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087182 4780 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087352 4780 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087362 4780 container_manager_linux.go:303] "Creating device plugin manager"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087508 4780 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087542 4780 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087780 4780 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.087893 4780 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.088379 4780 kubelet.go:418] "Attempting to sync node with API server"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.088398 4780 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.088497 4780 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.088511 4780 kubelet.go:324] "Adding apiserver pod source"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.088522 4780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.090062 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.090069 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.090147 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.090148 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.090643 4780 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.090938 4780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.091564 4780 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092053 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092075 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092082 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092088 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092099 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092105 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092111 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092141 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092148 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092156 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092174 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092181 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092197 4780 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092638 4780 server.go:1280] "Started kubelet"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092725 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092867 4780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.092868 4780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.093935 4780 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 05 06:46:06 crc systemd[1]: Started Kubernetes Kubelet.
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.097677 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.097708 4780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.098106 4780 server.go:460] "Adding debug handlers to kubelet server"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.098179 4780 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.098190 4780 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.098193 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:39:20.772503328 +0000 UTC
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.099105 4780 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.100940 4780 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.101092 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.101159 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.101227 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101670 4780 factory.go:153] Registering CRI-O factory
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101705 4780 factory.go:221] Registration of the crio container factory successfully
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101766 4780 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101775 4780 factory.go:55] Registering systemd factory
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101784 4780 factory.go:221] Registration of the systemd container factory successfully
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101801 4780 factory.go:103] Registering Raw factory
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.101817 4780 manager.go:1196] Started watching for new ooms in manager
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.101432 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e3ecfc6f07312 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 06:46:06.09261237 +0000 UTC m=+0.162128702,LastTimestamp:2025-12-05 06:46:06.09261237 +0000 UTC m=+0.162128702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.103181 4780 manager.go:319] Starting recovery of all containers
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107134 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107165 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107175 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107186 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107196 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107204 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107213 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107221 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107231 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107239 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107248 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107257 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107283 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107292 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107301 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107325 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107350 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107359 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107368 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107376 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107385 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107394 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107402 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107411 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107422 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107432 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107445 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107455 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107465 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107474 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107483 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107493 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107502 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107512 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107521 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107531 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107540 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107549 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107559 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107569 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107578 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107589 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107598 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107608 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107616 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107627 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107636 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107645 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107654 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107665 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107676 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107689 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107705 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107715 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107726 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107735 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107746 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107755 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107765 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107774 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107783 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107793 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107801 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107811 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107841 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107850 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107862 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107873 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107895 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107904 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107914 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107923 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107932 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107942 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107952 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107961 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107970 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107979 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107988 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.107998 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108008 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108016 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108025 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108037 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108046 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108055 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108064 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108073 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108082 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108091 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108100 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108110 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108119 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108128 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108137 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108147 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108157 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108166 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108177 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108186 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108195 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108205 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108216 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108226 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108240 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108251 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108261 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108272 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108283 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108293 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108302 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108317 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108327 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108336 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108346 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108358 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108367 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108377 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108386 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108396 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108425 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108434 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108445 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108454 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108463 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108472 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108483 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108493 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108502 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108512 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108522 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108531 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108540 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108549 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108558 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108567 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108576 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108586 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108595 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108604 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108613 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108623 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108632 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108642 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108651 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108660 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108671 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108682 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108695 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108705 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108714 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108723 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108732 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108741 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108750 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108759 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108768 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108777 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108786 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108795 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108803 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108812 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108822 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108830 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108839 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108848 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108856 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108865 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108887 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108897 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108906 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108915 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108924 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108932 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108940 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108949 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108958 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108966 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.108976 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109511 4780 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109529 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109540 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109549 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109566 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109574 4780 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109584 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109593 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109601 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109609 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109618 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109626 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109635 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109645 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109653 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109663 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109674 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109685 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109697 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109707 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109715 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109724 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109733 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109743 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109752 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109761 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109769 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109779 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109789 4780 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109798 4780 reconstruct.go:97] "Volume reconstruction finished" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.109805 4780 reconciler.go:26] "Reconciler: start to sync state" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.125322 4780 manager.go:324] Recovery completed Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.133272 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.134520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.134553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.134561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.136004 4780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.136818 4780 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.136844 4780 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.136868 4780 state_mem.go:36] "Initialized new in-memory state store" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.137454 4780 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.137490 4780 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.137511 4780 kubelet.go:2335] "Starting kubelet main sync loop" Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.137551 4780 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.138734 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.138775 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.145316 4780 policy_none.go:49] "None policy: Start" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.146206 4780 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.146233 4780 state_mem.go:35] "Initializing new in-memory state store" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189004 4780 manager.go:334] "Starting Device Plugin manager" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189074 4780 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189087 4780 server.go:79] "Starting device plugin registration server" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189551 4780 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189566 4780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189754 4780 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189867 4780 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.189898 4780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.195460 4780 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.238614 4780 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.238700 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 
06:46:06.240108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.240144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.240154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.240317 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.240609 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.240670 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241222 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241374 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241413 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241929 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.241804 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242061 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242261 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242308 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242479 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.242505 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.244842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.244910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.244933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.246072 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.246520 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.247185 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.247489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.247521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.247535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.249368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.249447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.249468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.249852 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.249966 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.250179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.250208 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.250217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.251446 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.251505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.251525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.289846 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.290815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.290850 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.290862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.291086 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.291559 4780 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.302374 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312633 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312696 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312760 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312790 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312811 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.312949 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313006 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313059 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313127 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313174 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.313229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414472 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414489 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414506 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414521 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414537 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414569 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414642 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414686 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414714 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414668 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414849 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414868 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414904 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414918 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414938 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414933 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414955 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414978 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.414862 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.415001 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.492680 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.495006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.495078 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.495101 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.495149 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.495951 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.578712 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.584045 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.598199 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.611592 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.612939 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c827ac0d3eebab31f5c2172f1cbac05790c66bb4af606c7047e718728707575b WatchSource:0}: Error finding container c827ac0d3eebab31f5c2172f1cbac05790c66bb4af606c7047e718728707575b: Status 404 returned error can't find the container with id c827ac0d3eebab31f5c2172f1cbac05790c66bb4af606c7047e718728707575b
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.615049 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-03ab359200713bd3232bd6124c97fea6df49bb0cc703f99f5f02ab332f41d08b WatchSource:0}: Error finding container 03ab359200713bd3232bd6124c97fea6df49bb0cc703f99f5f02ab332f41d08b: Status 404 returned error can't find the container with id 03ab359200713bd3232bd6124c97fea6df49bb0cc703f99f5f02ab332f41d08b
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.617286 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.624168 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1490111b142d7d642879d727eefb64695221069c3e1a3668fdcad8eb83252bfc WatchSource:0}: Error finding container 1490111b142d7d642879d727eefb64695221069c3e1a3668fdcad8eb83252bfc: Status 404 returned error can't find the container with id 1490111b142d7d642879d727eefb64695221069c3e1a3668fdcad8eb83252bfc
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.627740 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dfb9866d9185422dda4d93457130f538bc31fbd553027b9f86bef3338497db3d WatchSource:0}: Error finding container dfb9866d9185422dda4d93457130f538bc31fbd553027b9f86bef3338497db3d: Status 404 returned error can't find the container with id dfb9866d9185422dda4d93457130f538bc31fbd553027b9f86bef3338497db3d
Dec 05 06:46:06 crc kubenswrapper[4780]: W1205 06:46:06.634859 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-369f1748e2eddf632f74167a20fde6f6863c62784cfea523752eba27c6335977 WatchSource:0}: Error finding container 369f1748e2eddf632f74167a20fde6f6863c62784cfea523752eba27c6335977: Status 404 returned error can't find the container with id 369f1748e2eddf632f74167a20fde6f6863c62784cfea523752eba27c6335977
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.703975 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.896705 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.898056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.898121 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.898140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:06 crc kubenswrapper[4780]: I1205 06:46:06.898180 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 06:46:06 crc kubenswrapper[4780]: E1205 06:46:06.898744 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.094271 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.099309 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:28:34.190991382 +0000 UTC
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.099383 4780 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 115h42m27.091610699s for next certificate rotation
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.142719 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a" exitCode=0
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.142783 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.142860 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c827ac0d3eebab31f5c2172f1cbac05790c66bb4af606c7047e718728707575b"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.142961 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.144867 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46b84f5438ded9ce5acd4ba779b3babcbe54b249a625f68e86ee52404d5abffc" exitCode=0
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.144908 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46b84f5438ded9ce5acd4ba779b3babcbe54b249a625f68e86ee52404d5abffc"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.144923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.144939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"369f1748e2eddf632f74167a20fde6f6863c62784cfea523752eba27c6335977"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.144961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.145007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.145036 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.145643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.145669 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.145678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146307 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146786 4780 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2" exitCode=0
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dfb9866d9185422dda4d93457130f538bc31fbd553027b9f86bef3338497db3d"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146921 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146939 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146949 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.146951 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.147535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.147561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.147573 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.148068 4780 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184" exitCode=0
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.148086 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.148105 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1490111b142d7d642879d727eefb64695221069c3e1a3668fdcad8eb83252bfc"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.148176 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.149056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.149086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.149095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.149868 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399"}
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.149922 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03ab359200713bd3232bd6124c97fea6df49bb0cc703f99f5f02ab332f41d08b"}
Dec 05 06:46:07 crc kubenswrapper[4780]: W1205 06:46:07.240328 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:07 crc kubenswrapper[4780]: E1205 06:46:07.240414 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:07 crc kubenswrapper[4780]: W1205 06:46:07.481675 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:07 crc kubenswrapper[4780]: E1205 06:46:07.481737 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:07 crc kubenswrapper[4780]: E1205 06:46:07.504547 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s"
Dec 05 06:46:07 crc kubenswrapper[4780]: W1205 06:46:07.553710 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:07 crc kubenswrapper[4780]: E1205 06:46:07.553795 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:07 crc kubenswrapper[4780]: W1205 06:46:07.576727 4780 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Dec 05 06:46:07 crc kubenswrapper[4780]: E1205 06:46:07.576835 4780 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.699168 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.700541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.700594 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.700608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:07 crc kubenswrapper[4780]: I1205 06:46:07.700640 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.154870 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.154939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.154944 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.154953 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.156047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.156085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.156094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157810 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157823 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.157948 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.158621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.158655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.158669 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.159699 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b0167b1f0592be15afa3675d839c49ebb15305cced3aea7646fc711068c0e24" exitCode=0
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.159723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b0167b1f0592be15afa3675d839c49ebb15305cced3aea7646fc711068c0e24"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.159784 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.160719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.160768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.160783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.164106 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.164185 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.164980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.164999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.165008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.166822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.166848 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.166862 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801"}
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.166950 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.167501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.167521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.167529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.204832 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.363375 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:08 crc kubenswrapper[4780]: I1205 06:46:08.575179 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.172535 4780 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5a5ddba5d3bc0980086e05f098da16b31e8c732b7b8bbff1726f3cfa9d836d3b" exitCode=0
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.172665 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.172714 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.172762 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173383 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173402 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173619 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173621 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5a5ddba5d3bc0980086e05f098da16b31e8c732b7b8bbff1726f3cfa9d836d3b"}
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.173745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174522 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174575 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174622 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:09 crc kubenswrapper[4780]: I1205 06:46:09.174671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.177269 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.177960 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"49595adaf935e63ab9b77e5b905a316a878fb58fff45b4b3906375b6e7747817"}
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178055 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f26d999387148e2f5f7017300874d06a9bf34a57d0403422c3931be1969d3cea"}
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e366a1a0d84682064cec2d5ad3843909f170ea6f49e742234f9ba560df31e426"}
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94d327c93ffe6ead12077728b8fceef841cec8dda57b1f7db628ca909f0f77f3"}
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.178459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:10 crc kubenswrapper[4780]: I1205 06:46:10.307661 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.189504 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.189465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"78347eb32bd54337b1116ad6ac65ed273fffa245f7fc8f6954b8c3ce21c99347"}
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.189620 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191618 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.191921 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.312002 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.312348 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.314174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.314258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.314285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.734585 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.735019 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.736836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.736933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:11 crc kubenswrapper[4780]: I1205 06:46:11.736959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.089388 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.099173 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.192633 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.192674 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.192813 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.193978 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.194054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.194077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.194274 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.194310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:12 crc kubenswrapper[4780]: I1205 06:46:12.194320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.001602 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.194588 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.194589 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.195840 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.195947 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.195974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.197417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.197471 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:13 crc kubenswrapper[4780]: I1205 06:46:13.197493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:14 crc kubenswrapper[4780]: I1205 06:46:14.085870 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 05 06:46:14 crc kubenswrapper[4780]: I1205 06:46:14.198117 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:14 crc kubenswrapper[4780]: I1205 06:46:14.199697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:14 crc kubenswrapper[4780]: I1205 06:46:14.199784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:14 crc kubenswrapper[4780]: I1205 06:46:14.199805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:16 crc kubenswrapper[4780]: E1205 06:46:16.195584 4780 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.481933 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.482349 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.484235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.484275 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.484288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:16 crc kubenswrapper[4780]: I1205 06:46:16.488789 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 05 06:46:17 crc kubenswrapper[4780]: I1205 06:46:17.205048 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:17 crc kubenswrapper[4780]: I1205 06:46:17.206423 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:17 crc kubenswrapper[4780]: I1205 06:46:17.206504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:17 crc kubenswrapper[4780]: I1205 06:46:17.206523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:17 crc kubenswrapper[4780]: E1205 06:46:17.702095 4780 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 05 06:46:18 crc kubenswrapper[4780]: I1205 06:46:18.094197 4780 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 05 06:46:18 crc kubenswrapper[4780]: E1205 06:46:18.206186 4780 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 05 06:46:18 crc kubenswrapper[4780]: I1205 06:46:18.363444 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 06:46:18 crc kubenswrapper[4780]: I1205 06:46:18.363545 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 06:46:18 crc kubenswrapper[4780]: E1205 06:46:18.691503 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187e3ecfc6f07312 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 06:46:06.09261237 +0000 UTC m=+0.162128702,LastTimestamp:2025-12-05 06:46:06.09261237 +0000 UTC m=+0.162128702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 06:46:19 crc kubenswrapper[4780]: E1205 06:46:19.106139 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.254668 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.254728 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.302689 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.303847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.303894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.303907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.303933 4780 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.482286 4780 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 06:46:19 crc kubenswrapper[4780]: I1205 06:46:19.482369 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 06:46:22 crc kubenswrapper[4780]: I1205 06:46:22.390845 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 05 06:46:22 crc kubenswrapper[4780]: I1205 06:46:22.408693 4780 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.076260 4780 csr.go:261] certificate signing request csr-fbbdv is approved, waiting to be issued
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.104215 4780 csr.go:257] certificate signing request csr-fbbdv is issued
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.368345 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.368560 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.370119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.370159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.370169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:23 crc kubenswrapper[4780]: I1205 06:46:23.375973 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.105404 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-05 06:41:23 +0000 UTC, rotation deadline is 2026-08-21 21:08:11.628159786 +0000 UTC
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.105451 4780 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6230h21m47.522712652s for next certificate rotation
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.113835 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.113995 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.114971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.115010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.115025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.128006 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.221651 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.221702 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.221666 4780 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.222705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.222737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.222749 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.222852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.222977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.223007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249858 4780 trace.go:236] Trace[1922558604]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 06:46:10.069) (total time: 14180ms):
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[1922558604]: ---"Objects listed" error: 14180ms (06:46:24.249)
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[1922558604]: [14.180581556s] [14.180581556s] END
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249904 4780 trace.go:236] Trace[333035515]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 06:46:09.602) (total time: 14647ms):
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[333035515]: ---"Objects listed" error: 14647ms (06:46:24.249)
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[333035515]: [14.647356918s] [14.647356918s] END
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249921 4780 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249908 4780 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249972 4780 trace.go:236] Trace[1364870690]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 06:46:10.764) (total time: 13485ms):
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[1364870690]: ---"Objects listed" error: 13485ms (06:46:24.249)
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[1364870690]: [13.485635055s] [13.485635055s] END
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.249991 4780 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.251692 4780 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.252380 4780 trace.go:236] Trace[2008373180]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 06:46:10.160) (total time: 14092ms):
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[2008373180]: ---"Objects listed" error: 14091ms (06:46:24.251)
Dec 05 06:46:24 crc kubenswrapper[4780]: Trace[2008373180]: [14.092243085s] [14.092243085s] END
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.252397 4780 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.297491 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55954->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.297553 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55954->192.168.126.11:17697: read: connection reset by peer"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.297563 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55958->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.297643 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55958->192.168.126.11:17697: read: connection reset by peer"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.298020 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.298072 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.317978 4780 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.318794 4780 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.322196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.322234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.322242 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.322257 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.322269 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.334556 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.337470 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.337597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.337711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.337800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.337899 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.347237 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.351913 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.351951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.351959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.351974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.351985 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.371243 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.374937 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.374973 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.374983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.374996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.375007 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.385318 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.388808 4780 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.388841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.388849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.388862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.388874 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.408416 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:24 crc kubenswrapper[4780]: E1205 06:46:24.408595 4780 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.410390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.410439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.410450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.410468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.410487 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.512937 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.512977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.512986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.513001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.513012 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.615357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.615391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.615399 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.615411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.615422 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.717788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.717824 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.717833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.717847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.717856 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.820044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.820240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.820329 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.820394 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.820459 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.923102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.923136 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.923145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.923159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:24 crc kubenswrapper[4780]: I1205 06:46:24.923169 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:24Z","lastTransitionTime":"2025-12-05T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.026156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.026191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.026199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.026213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.026223 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.098310 4780 apiserver.go:52] "Watching apiserver" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.101972 4780 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.102387 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-frbm8","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.102874 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.102933 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.102945 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.103050 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-frbm8" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.103058 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.103175 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.103201 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.103245 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.103360 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.104814 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.106779 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.108357 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110014 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110295 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110317 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110659 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110774 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110908 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.110990 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.111375 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.111684 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.112021 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.125309 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.128075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.128106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.128119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.128135 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.128147 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.135010 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.152584 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.159723 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.168110 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.176320 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.185684 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.196918 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.201793 4780 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.225414 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.227126 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce" exitCode=255 Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.227160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.229654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.229711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.229724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.229742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.229753 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.239509 4780 scope.go:117] "RemoveContainer" containerID="631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.240602 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.251167 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.257961 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258003 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258063 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258083 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258100 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258115 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258131 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258175 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258192 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258222 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258237 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258251 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258278 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258292 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258309 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258404 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258427 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258423 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258490 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258511 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258528 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258543 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258559 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258574 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258655 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258674 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258756 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258779 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258795 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258811 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258828 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258844 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258861 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258932 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258939 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258948 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258964 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.258984 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259071 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259085 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259104 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259119 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259177 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259192 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259207 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259223 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259225 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259237 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259271 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259288 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259304 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259318 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259334 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259349 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259364 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259392 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") 
pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259410 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259425 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259440 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259490 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259506 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259522 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259725 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259740 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259755 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259840 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259856 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259871 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259903 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259922 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259937 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259951 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259980 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259997 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260012 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260029 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260043 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260059 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260090 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260107 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260122 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260139 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260154 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260169 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260185 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260200 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260216 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260233 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260248 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260264 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260279 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260309 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260324 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260340 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260424 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260441 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260474 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260505 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260520 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260537 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260590 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260639 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260655 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260675 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 
06:46:25.260700 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260723 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260740 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260756 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260792 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260807 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260837 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260870 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260902 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260922 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260938 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260955 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260970 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261005 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261021 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261037 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261053 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261073 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261105 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261129 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261152 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261174 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261191 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261206 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261223 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261239 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261269 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261317 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261334 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261368 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261384 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261420 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261438 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 
06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261454 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261471 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261505 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261522 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261539 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261555 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261573 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261588 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261604 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261627 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261645 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261679 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261697 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261714 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261730 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261747 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261770 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261795 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261814 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261850 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261957 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261992 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262011 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262076 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262117 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79be6eea-5a91-47e1-8284-989d30c1a8b4-hosts-file\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwvq\" (UniqueName: \"kubernetes.io/projected/79be6eea-5a91-47e1-8284-989d30c1a8b4-kube-api-access-cbwvq\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262206 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262225 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:25 crc 
kubenswrapper[4780]: I1205 06:46:25.262244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262277 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262295 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262353 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262365 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262375 4780 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262384 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262394 4780 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262403 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259470 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: 
"43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.259859 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260425 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260603 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.260908 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298207 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.261686 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262292 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262378 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262617 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262659 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.262753 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263052 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263128 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263208 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263354 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.263668 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.264338 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.264521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.264832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.265976 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.266186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.274521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.274871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.275265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.275642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.275854 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.276196 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.276689 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277011 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277238 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277644 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277728 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.277953 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.278261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.278387 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.278495 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.278605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.278723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279196 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279304 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279557 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279665 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.279844 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.280064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.280256 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.280371 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.280733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.280849 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.282171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.282336 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.282392 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.288526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.289824 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290486 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290504 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290830 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290908 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290928 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.290977 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291288 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291318 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291542 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291714 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.291904 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:25.791867501 +0000 UTC m=+19.861383833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.291904 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292132 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292213 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292349 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292374 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292705 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.292816 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.293036 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.293095 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.293439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.293447 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.293475 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.294566 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.294635 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.294838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295953 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295253 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.299046 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295534 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.299075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295582 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295604 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295738 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296072 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296356 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296548 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296910 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.296946 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297413 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297597 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297629 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297866 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.297959 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298047 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298281 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298473 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298510 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.298588 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.295091 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.299120 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.299129 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.299686 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.301025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.301650 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.301817 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.301833 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.301835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302107 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302180 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302329 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302559 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302604 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302627 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.302872 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.303134 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.303421 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.303493 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:25.803474834 +0000 UTC m=+19.872991166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.303649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.303680 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.303938 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.304452 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.304554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.304684 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305037 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305197 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305404 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305556 4780 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.305695 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.305839 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.306002 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:25.805985494 +0000 UTC m=+19.875501826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.306130 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.310439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.318711 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.318925 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320179 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320243 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320245 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320280 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320265 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320293 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320746 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320750 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.320972 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.321214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.324121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.325111 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.326085 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.326117 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.326299 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.326391 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:25.82637071 +0000 UTC m=+19.895887042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.326570 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.326756 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.327752 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.328974 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.329163 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.329301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.329625 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.329643 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.329655 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.329692 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:25.829679063 +0000 UTC m=+19.899195395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.331685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.332874 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.334244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.334439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.334498 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.334656 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.334680 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335347 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.335549 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.336200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.336525 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.336623 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.337928 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.339042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.340202 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.343738 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.344176 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.349576 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.352500 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.354130 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.354130 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363080 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79be6eea-5a91-47e1-8284-989d30c1a8b4-hosts-file\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363342 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwvq\" (UniqueName: \"kubernetes.io/projected/79be6eea-5a91-47e1-8284-989d30c1a8b4-kube-api-access-cbwvq\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363567 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363670 4780 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363729 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363800 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363862 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363961 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.364048 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.364103 4780 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.364202 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.364280 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365050 4780 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365066 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363742 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.363202 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79be6eea-5a91-47e1-8284-989d30c1a8b4-hosts-file\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.364626 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
"Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365566 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365574 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365584 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365592 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365600 4780 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365611 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365622 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365630 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365638 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365647 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365655 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365665 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365673 4780 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365681 4780 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365689 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365697 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365705 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365714 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365722 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365731 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365740 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365748 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365756 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365763 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365773 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365782 4780 reconciler_common.go:293] "Volume detached for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365789 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365797 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365805 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365814 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365822 4780 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365832 4780 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365855 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365865 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365873 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365894 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365902 4780 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365911 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365920 4780 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365928 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365958 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365967 4780 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365975 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365983 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.365992 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366003 4780 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366043 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366053 4780 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366061 4780 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366069 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366077 4780 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366084 4780 
reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366094 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366102 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366110 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366120 4780 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366129 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366137 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366146 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366182 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366191 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366199 4780 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366207 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366215 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc 
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366223 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366230 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366238 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366247 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366255 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366280 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366288 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366295 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366303 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366312 4780 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366319 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366361 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366370 4780 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366378 4780 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366386 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366394 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366402 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366410 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366418 4780 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366426 4780 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366434 4780 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366442 4780 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366450 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366475 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366514 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366525 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366533 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366541 4780 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366549 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366556 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366565 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366573 4780 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366581 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366589 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366596 4780 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366604 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366612 4780 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366620 4780 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366628 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366636 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366649 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366657 4780 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366688 4780 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366696 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366704 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366712 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366720 4780 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366746 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366754 4780 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366762 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366789 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.366980 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367006 4780 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367020 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367041 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367054 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367067 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367079 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367091 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367105 4780 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367120 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367132 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367145 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367157 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367169 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
"Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367200 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367211 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367223 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367235 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367246 4780 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367257 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367268 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367279 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367292 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367303 4780 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367315 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367327 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367339 4780 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367350 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367362 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367376 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367389 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367402 4780 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367414 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367426 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367438 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367449 4780 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367460 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367471 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367483 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367497 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367512 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367524 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367535 4780 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367547 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367561 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367573 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367586 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367597 4780 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367609 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367619 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367631 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367643 4780 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367654 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367665 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367676 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.367689 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.375692 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.375692 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.378561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwvq\" (UniqueName: \"kubernetes.io/projected/79be6eea-5a91-47e1-8284-989d30c1a8b4-kube-api-access-cbwvq\") pod \"node-resolver-frbm8\" (UID: \"79be6eea-5a91-47e1-8284-989d30c1a8b4\") " pod="openshift-dns/node-resolver-frbm8"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.387151 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.418180 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.425056 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.433178 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Need to start a new one" pod="openshift-dns/node-resolver-frbm8" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.441239 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.441272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.441283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.441299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.441311 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: W1205 06:46:25.441679 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8d9d4786d7cc5c9f02ace04f51306385ad328aecafb5a2a67e6b576e1e8a233c WatchSource:0}: Error finding container 8d9d4786d7cc5c9f02ace04f51306385ad328aecafb5a2a67e6b576e1e8a233c: Status 404 returned error can't find the container with id 8d9d4786d7cc5c9f02ace04f51306385ad328aecafb5a2a67e6b576e1e8a233c Dec 05 06:46:25 crc kubenswrapper[4780]: W1205 06:46:25.443845 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-bc2dedaab14964aa5b80c45a155b222874e361a0d0b1aa597b54e03716c28843 WatchSource:0}: Error finding container bc2dedaab14964aa5b80c45a155b222874e361a0d0b1aa597b54e03716c28843: Status 404 returned error can't find the container with id bc2dedaab14964aa5b80c45a155b222874e361a0d0b1aa597b54e03716c28843 Dec 05 06:46:25 crc kubenswrapper[4780]: W1205 06:46:25.453771 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79be6eea_5a91_47e1_8284_989d30c1a8b4.slice/crio-d83150daea79362b246ab404b50051f088ad83adadeb1f9d32e69d6dfea2c6b9 WatchSource:0}: Error finding container d83150daea79362b246ab404b50051f088ad83adadeb1f9d32e69d6dfea2c6b9: Status 404 returned error can't find the container with id d83150daea79362b246ab404b50051f088ad83adadeb1f9d32e69d6dfea2c6b9 Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.548458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.548760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.548771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.548787 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 
06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.548798 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.651813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.651853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.651864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.651895 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.651908 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.753971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.754003 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.754011 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.754025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.754033 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.856205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.856254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.856268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.856285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.856297 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.858709 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mjftd"] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.859126 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.862875 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.863945 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bwf64"] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.864223 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.865001 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-scs78"] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.865529 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.865813 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.866103 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.866304 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.866480 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.868682 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.868968 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.869126 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.869476 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.869628 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.870818 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.871014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.877205 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.877317 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.877223 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878095 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.877319 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878155 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878165 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.877805 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878253 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878260 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.878286 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.878320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878385 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878416 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:26.878403664 +0000 UTC m=+20.947919996 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878437 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:26.878426035 +0000 UTC m=+20.947942367 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878447 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:26.878442605 +0000 UTC m=+20.947958937 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878463 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:26.878454245 +0000 UTC m=+20.947970577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: E1205 06:46:25.878472 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:26.878467986 +0000 UTC m=+20.947984308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.880762 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.896470 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.911398 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.923312 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.938788 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.956682 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.958462 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.958494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.958503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.958518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.958528 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:25Z","lastTransitionTime":"2025-12-05T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.972490 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978583 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-kubelet\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-proxy-tls\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-hostroot\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978658 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-conf-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978764 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d78jd\" (UniqueName: \"kubernetes.io/projected/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-kube-api-access-d78jd\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978933 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-system-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.978993 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-cni-binary-copy\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979025 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-netns\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979060 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-system-cni-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979088 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-os-release\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-rootfs\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979137 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-etc-kubernetes\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9nwb\" (UniqueName: \"kubernetes.io/projected/74991823-72ec-4b41-bb63-e92307688c30-kube-api-access-v9nwb\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979185 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cnibin\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979211 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-bin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979232 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6sn\" (UniqueName: \"kubernetes.io/projected/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-kube-api-access-xk6sn\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979345 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979392 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-cnibin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979414 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-k8s-cni-cncf-io\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979435 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-multus\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-os-release\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc 
kubenswrapper[4780]: I1205 06:46:25.979516 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-multus-daemon-config\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979587 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979621 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-socket-dir-parent\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.979645 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-multus-certs\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:25 crc kubenswrapper[4780]: I1205 06:46:25.987045 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.000965 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:25Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.013590 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.027021 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.043217 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.044189 4780 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044384 4780 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044430 4780 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044457 4780 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044495 4780 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044539 4780 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044566 4780 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044751 4780 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044776 4780 reflector.go:484] 
object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044802 4780 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044822 4780 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044843 4780 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044789 4780 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044855 4780 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044899 4780 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044845 4780 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044862 4780 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044824 4780 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044953 4780 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": 
Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044747 4780 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044977 4780 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044539 4780 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.044995 4780 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.045010 4780 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.045027 4780 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.060199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.060235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.060244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.060259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.060269 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080565 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-cnibin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080599 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-k8s-cni-cncf-io\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-multus\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080628 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-os-release\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080643 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-multus-daemon-config\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080671 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-socket-dir-parent\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080685 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080704 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-multus-certs\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-proxy-tls\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: 
I1205 06:46:26.080732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-kubelet\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-k8s-cni-cncf-io\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080771 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-multus-certs\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-hostroot\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080753 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-multus\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-os-release\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-conf-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080853 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d78jd\" (UniqueName: \"kubernetes.io/projected/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-kube-api-access-d78jd\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080928 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-system-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-cni-binary-copy\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-netns\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081004 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-os-release\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-system-cni-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-cnibin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-rootfs\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081051 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-kubelet\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081077 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-etc-kubernetes\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-socket-dir-parent\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " 
pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-etc-kubernetes\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081118 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9nwb\" (UniqueName: \"kubernetes.io/projected/74991823-72ec-4b41-bb63-e92307688c30-kube-api-access-v9nwb\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081139 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cnibin\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081157 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-bin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081168 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-os-release\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081177 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081160 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-system-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081199 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081218 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6sn\" (UniqueName: \"kubernetes.io/projected/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-kube-api-access-xk6sn\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081217 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-system-cni-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081255 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-run-netns\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-rootfs\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.080771 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-hostroot\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081368 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-conf-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081459 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-multus-cni-dir\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081596 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cnibin\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081623 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-multus-daemon-config\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74991823-72ec-4b41-bb63-e92307688c30-host-var-lib-cni-bin\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/74991823-72ec-4b41-bb63-e92307688c30-cni-binary-copy\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.081912 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-cni-binary-copy\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.082005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.084190 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-proxy-tls\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.115622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9nwb\" (UniqueName: \"kubernetes.io/projected/74991823-72ec-4b41-bb63-e92307688c30-kube-api-access-v9nwb\") pod \"multus-bwf64\" (UID: \"74991823-72ec-4b41-bb63-e92307688c30\") " pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.119577 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6sn\" (UniqueName: \"kubernetes.io/projected/cce73093-dc28-44a3-b6b6-e153e0f4d1ff-kube-api-access-xk6sn\") pod \"multus-additional-cni-plugins-scs78\" (UID: \"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\") " pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.121866 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d78jd\" (UniqueName: 
\"kubernetes.io/projected/a640087b-e493-4ac1-bef1-a9c05dd7c0ad-kube-api-access-d78jd\") pod \"machine-config-daemon-mjftd\" (UID: \"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\") " pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.138011 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.138183 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.142609 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.143495 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.144998 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.145746 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.146791 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.147289 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.147821 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.148696 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.149318 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.150191 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.150660 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.151920 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.152438 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.153007 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.154355 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.154859 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.155435 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.156059 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.156639 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.157243 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.157853 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.159660 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.160078 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161189 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 
06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161735 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161766 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.161846 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.162972 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.163560 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.164397 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.165016 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.165801 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.166240 4780 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.166351 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.167947 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.168771 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" 
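The NodeNotReady condition recorded just above cites a missing CNI configuration: the kubelet keeps reporting NetworkPluginNotReady until a network config file appears in /etc/kubernetes/cni/net.d/. As a rough illustration of that gate — a minimal Go sketch, not kubelet source; the .conf/.conflist/.json extension set follows common CNI convention and is an assumption here:

```go
// cnicheck.go — hypothetical illustration, NOT kubelet code. It mirrors
// (approximately) why the kubelet reports NetworkPluginNotReady: the node
// stays NotReady until a CNI config file shows up in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains a candidate CNI network config.
// The extension set is an assumption based on CNI convention, not the
// kubelet's exact logic.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	ok, err := hasCNIConfig(dir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	if !ok {
		fmt.Println("no CNI configuration file in", dir, "- node would stay NotReady")
	}
}
```

On this node the condition should clear once the network plugin pods whose volumes are being mounted above (multus, OVN-Kubernetes) write their config into that directory.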
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.169161 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.170687 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.172794 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.173331 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.174434 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.175096 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.175938 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.176497 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.177408 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.177981 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.179081 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.179722 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.180577 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.181271 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 
06:46:26.182073 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.182511 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.188417 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.189058 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.189238 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.189625 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.190570 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.201434 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bwf64" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.209899 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-scs78" Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.222507 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74991823_72ec_4b41_bb63_e92307688c30.slice/crio-7bb7c2800002bbb14012f4507a99fa94fcb4e80b381dce72e6a19f7128d3c7e4 WatchSource:0}: Error finding container 7bb7c2800002bbb14012f4507a99fa94fcb4e80b381dce72e6a19f7128d3c7e4: Status 404 returned error can't find the container with id 7bb7c2800002bbb14012f4507a99fa94fcb4e80b381dce72e6a19f7128d3c7e4 Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.230610 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6bb9d3cee3e17906cef0845ad5266274ee6cf183a195856599b8382bbb76e0cc"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.232561 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.232588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8d9d4786d7cc5c9f02ace04f51306385ad328aecafb5a2a67e6b576e1e8a233c"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.235480 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lf5cd"] Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.238292 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcce73093_dc28_44a3_b6b6_e153e0f4d1ff.slice/crio-ec5dfdc066775b42c11c3e945b2116fed8448911fe3a3976841965a1c386c5d5 WatchSource:0}: Error finding container ec5dfdc066775b42c11c3e945b2116fed8448911fe3a3976841965a1c386c5d5: Status 404 returned error can't find the container with id ec5dfdc066775b42c11c3e945b2116fed8448911fe3a3976841965a1c386c5d5 Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.238729 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.238796 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.238807 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc2dedaab14964aa5b80c45a155b222874e361a0d0b1aa597b54e03716c28843"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.239296 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.240357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerStarted","Data":"7bb7c2800002bbb14012f4507a99fa94fcb4e80b381dce72e6a19f7128d3c7e4"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241479 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241527 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241550 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241637 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241821 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241902 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.241986 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.249527 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"84bd4a5c61c2963c7c340972d08dbe6a640eb24bf7f207382a6e8daa15029529"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.250587 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-frbm8" event={"ID":"79be6eea-5a91-47e1-8284-989d30c1a8b4","Type":"ContainerStarted","Data":"e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.250606 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-frbm8" event={"ID":"79be6eea-5a91-47e1-8284-989d30c1a8b4","Type":"ContainerStarted","Data":"d83150daea79362b246ab404b50051f088ad83adadeb1f9d32e69d6dfea2c6b9"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.253344 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.254659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.255102 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.267513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc 
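Two details in the records above are easy to misread. The W1205 "Failed to process watch event ... 404" warnings are a transient startup race, typically harmless: cAdvisor notices the new crio-<id> cgroup before CRI-O can answer queries about that container. And the pod segment of the cgroup slice name encodes the pod UID with underscores, because "-" is a hierarchy separator in systemd slice names. A small sketch of mapping a slice name back to a pod UID; podUIDFromSlice is a hypothetical helper, not a kubelet API:

```go
package main

import (
	"fmt"
	"strings"
)

// podUIDFromSlice recovers a pod UID from a kubepods cgroup slice name by
// stripping the ".slice" suffix and the leading QoS segments, then swapping
// the underscores back to dashes.
func podUIDFromSlice(slice string) string {
	s := strings.TrimSuffix(slice, ".slice")
	i := strings.LastIndex(s, "-pod")
	if i < 0 {
		return ""
	}
	return strings.ReplaceAll(s[i+len("-pod"):], "_", "-")
}

func main() {
	fmt.Println(podUIDFromSlice("kubepods-burstable-pod74991823_72ec_4b41_bb63_e92307688c30.slice"))
	// Output: 74991823-72ec-4b41-bb63-e92307688c30 — the same ID the PLEG
	// ContainerStarted record for openshift-multus/multus-bwf64 carries above.
}
```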
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.267555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.267572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.267592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.267604 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.369504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.369537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.369547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.369560 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.369568 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384379 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384733 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384754 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384786 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384833 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.384865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385040 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385111 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385175 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385198 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385240 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385268 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.385292 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrsmk\" (UniqueName: \"kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.471779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.471827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.471842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.471862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.471891 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486634 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486657 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486671 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486685 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486699 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486712 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486743 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc 
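The reconciler records here come in pairs: reconciler_common.go:218 logs "MountVolume started" and operation_generator.go:637 logs "MountVolume.SetUp succeeded" for the same volume, and the slightly out-of-order timestamps that follow are expected because each mount runs on its own goroutine. A sketch that pairs the two record types to flag mounts still in flight; the sample lines are abbreviated with "...", and the patterns match the escaped \" quoting seen in the journal:

```go
package main

import (
	"fmt"
	"regexp"
)

// Volume names in these records are wrapped in literal \" escapes,
// so the patterns match a backslash followed by a quote.
var (
	reStarted   = regexp.MustCompile(`MountVolume started for volume \\"([^\\]+)\\"`)
	reSucceeded = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\"`)
)

func main() {
	lines := []string{
		`I1205 06:46:26.486657 ... "operationExecutor.MountVolume started for volume \"run-ovn\" ..."`,
		`I1205 06:46:26.486743 ... "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" ..."`,
		`I1205 06:46:26.486828 ... "MountVolume.SetUp succeeded for volume \"run-ovn\" ..."`,
	}
	pending := map[string]bool{}
	for _, l := range lines {
		if m := reStarted.FindStringSubmatch(l); m != nil {
			pending[m[1]] = true
		}
		if m := reSucceeded.FindStringSubmatch(l); m != nil {
			delete(pending, m[1])
		}
	}
	for v := range pending {
		// Prints ovn-node-metrics-cert: its SetUp completes at 06:46:26.492969 below.
		fmt.Println("still pending:", v)
	}
}
```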
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486746 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486785 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486791 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486759 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486860 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486828 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486917 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486941 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.486978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487043 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487441 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487558 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487622 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487646 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487678 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487693 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrsmk\" (UniqueName: \"kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487713 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487735 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487770 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.487831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.488005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.488039 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.488262 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.492969 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.499032 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.500726 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.505266 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrsmk\" (UniqueName: \"kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk\") pod \"ovnkube-node-lf5cd\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.575263 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.575296 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.575305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.575318 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.575328 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.628984 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:26 crc kubenswrapper[4780]: W1205 06:46:26.640205 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61c4a70b_17c4_4f09_a541_5161825c4c03.slice/crio-5643df53fea9a8073141ce9ccf15df7fd98a6f854d13400afaed824f42b9c215 WatchSource:0}: Error finding container 5643df53fea9a8073141ce9ccf15df7fd98a6f854d13400afaed824f42b9c215: Status 404 returned error can't find the container with id 5643df53fea9a8073141ce9ccf15df7fd98a6f854d13400afaed824f42b9c215 Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.677242 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.677272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.677283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.677298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.677309 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.780617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.780642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.780652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.780666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.780677 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.870434 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.883104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.883133 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.883144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.883159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.883169 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.891271 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.891361 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.891388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.891411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.891431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 
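The E1205 errors below are a restart artifact rather than a storage failure. The kubelet only tracks secrets and configmaps referenced by pods it has re-admitted, so until the relevant "Caches populated" records appear for openshift-network-console and openshift-network-diagnostics, volume setup fails with object ... not registered; the PVC unmount likewise fails until the kubevirt.io.hostpath-provisioner CSI driver re-registers itself. Each failed operation is parked with a delay ("No retries permitted until ... durationBeforeRetry 2s"). A minimal sketch of an exponential retry ladder consistent with those records; the base, factor, and cap are assumptions for illustration, not values read from the kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the wait after each failure, up to a cap
// (assumed constants; illustrative only).
func durationBeforeRetry(failures int) time.Duration {
	d := 500 * time.Millisecond
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= 2*time.Minute {
			return 2 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, durationBeforeRetry(n))
	}
	// failure 3 -> wait 2s, matching the durationBeforeRetry in the errors below.
}
```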
Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891519 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891542 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891541 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:28.89151568 +0000 UTC m=+22.961032012 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891604 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:28.891596742 +0000 UTC m=+22.961113064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891618 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:28.891611283 +0000 UTC m=+22.961127615 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891574 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891648 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891660 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891669 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891696 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:28.891690835 +0000 UTC m=+22.961207167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891703 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891719 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:26 crc kubenswrapper[4780]: E1205 06:46:26.891782 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:28.891761417 +0000 UTC m=+22.961277749 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.929001 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.937349 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.951798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.961598 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.968915 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.986332 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.986390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.986398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.986412 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:26 crc kubenswrapper[4780]: I1205 06:46:26.986422 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:26Z","lastTransitionTime":"2025-12-05T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.058462 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.070098 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.083131 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088755 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.088762 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.094167 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.096440 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.105770 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.106941 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.119917 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.131224 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.131363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.132816 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.133077 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.137913 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.137959 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:27 crc kubenswrapper[4780]: E1205 06:46:27.137996 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:27 crc kubenswrapper[4780]: E1205 06:46:27.138050 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.142534 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.154473 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.168184 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.168214 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.179454 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.181039 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-con
fig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.183499 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.191009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.191057 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.191069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.191100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.191113 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.201065 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.211904 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.214973 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.222219 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.227562 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.229557 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.240833 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.254189 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.258751 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerStarted","Data":"13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.261437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.261490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.262573 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9" exitCode=0 Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.262622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.262639 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerStarted","Data":"ec5dfdc066775b42c11c3e945b2116fed8448911fe3a3976841965a1c386c5d5"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.265205 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" exitCode=0 Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.265803 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.265826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"5643df53fea9a8073141ce9ccf15df7fd98a6f854d13400afaed824f42b9c215"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.272921 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.282762 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.284820 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.300570 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.302181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.302227 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.302237 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.302253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.302264 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.311390 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.323233 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.328645 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.331742 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.340341 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.350572 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.352024 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.364229 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.377007 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.389183 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.401670 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.410117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.410154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.410165 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.410180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.410190 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.418135 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.434380 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.449801 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.454626 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.462504 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.472142 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.485069 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.503852 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.512457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.512493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.512504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.512520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.512531 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.516366 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.523290 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.533778 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernet
es\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.553537 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.567217 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.584043 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.601243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.611542 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.615283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.615315 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.615324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.615337 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.615346 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.624112 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.643536 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.658197 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc 
kubenswrapper[4780]: I1205 06:46:27.670640 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.682659 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.695280 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.718235 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.718306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.718326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.718356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.718374 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.728617 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-j76x7"] Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.728988 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.731036 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.731057 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.731842 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.732102 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.747736 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc 
kubenswrapper[4780]: I1205 06:46:27.760392 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.771986 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.783480 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.795712 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.803981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f487d7de-9cce-457c-9dfa-09dfb392dde1-serviceca\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.804038 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f487d7de-9cce-457c-9dfa-09dfb392dde1-host\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.804093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrmb\" (UniqueName: \"kubernetes.io/projected/f487d7de-9cce-457c-9dfa-09dfb392dde1-kube-api-access-wdrmb\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.807694 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.820494 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.821247 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.821285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.821297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.821314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.821326 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.849349 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.893221 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.904954 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrmb\" (UniqueName: \"kubernetes.io/projected/f487d7de-9cce-457c-9dfa-09dfb392dde1-kube-api-access-wdrmb\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.905042 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f487d7de-9cce-457c-9dfa-09dfb392dde1-serviceca\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.905064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f487d7de-9cce-457c-9dfa-09dfb392dde1-host\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.905229 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f487d7de-9cce-457c-9dfa-09dfb392dde1-host\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.905937 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f487d7de-9cce-457c-9dfa-09dfb392dde1-serviceca\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.923444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.923483 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:27 crc kubenswrapper[4780]: 
I1205 06:46:27.923495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.923508 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.923516 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:27Z","lastTransitionTime":"2025-12-05T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.930190 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 06:46:27.956275 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrmb\" (UniqueName: \"kubernetes.io/projected/f487d7de-9cce-457c-9dfa-09dfb392dde1-kube-api-access-wdrmb\") pod \"node-ca-j76x7\" (UID: \"f487d7de-9cce-457c-9dfa-09dfb392dde1\") " pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:27 crc kubenswrapper[4780]: I1205 
06:46:27.988685 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:27Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.026972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.027011 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.027022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.027046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.027056 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.028926 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.054635 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j76x7" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.069456 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.114803 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.130073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.130108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.130117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.130130 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.130140 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.138395 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.138499 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.231606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.231639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.231651 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.231668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.231681 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.270109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j76x7" event={"ID":"f487d7de-9cce-457c-9dfa-09dfb392dde1","Type":"ContainerStarted","Data":"f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.270173 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j76x7" event={"ID":"f487d7de-9cce-457c-9dfa-09dfb392dde1","Type":"ContainerStarted","Data":"0ace4f1aa5987034cdb5f55876dd945cefffe97a06780cfb220762da63fe9713"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.272820 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a" exitCode=0 Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.272903 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.274249 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278576 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278586 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278597 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.278605 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.286634 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.299914 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.310864 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.324284 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc 
kubenswrapper[4780]: I1205 06:46:28.334057 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.337228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.337259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.337269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.337281 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.337292 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.349359 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.388511 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.448393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.448463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.448477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.448502 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.448517 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.454452 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.484041 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.513775 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.548645 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.551044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.551071 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.551079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.551091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.551100 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.587164 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.633194 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.653605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.653648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.653660 4780 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.653678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.653690 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.683668 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z 
is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.709609 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.751299 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.756112 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.756171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.756184 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.756205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.756218 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.789029 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.829965 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.857938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.857968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.857977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.857991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.858001 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.873564 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.912137 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.916184 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.916291 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916308 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:32.916285982 +0000 UTC m=+26.985802314 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.916345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.916405 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916428 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.916441 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916449 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916535 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916537 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916555 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916567 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916570 4780 secret.go:188] 
Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916481 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916580 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:32.91657033 +0000 UTC m=+26.986086752 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916630 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:32.916614021 +0000 UTC m=+26.986130433 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916657 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:32.916650102 +0000 UTC m=+26.986166554 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: E1205 06:46:28.916675 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:32.916665434 +0000 UTC m=+26.986181876 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.947339 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.960062 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.960107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.960124 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.960140 
4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.960153 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:28Z","lastTransitionTime":"2025-12-05T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:28 crc kubenswrapper[4780]: I1205 06:46:28.993212 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:28Z 
is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.030361 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.062992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 
06:46:29.063038 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.063051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.063077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.063091 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.068283 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.109969 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.138543 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.138579 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:29 crc kubenswrapper[4780]: E1205 06:46:29.138693 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:29 crc kubenswrapper[4780]: E1205 06:46:29.138781 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.147922 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.164932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.164961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.164970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.164984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.164995 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.191582 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:981
00674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.226253 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.267129 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.267178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.267192 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.267207 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.267216 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.282355 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8" exitCode=0 Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.282402 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.300980 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.318327 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.351190 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.370156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.370212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.370223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.370240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.370251 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.388523 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc
721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.430478 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.471987 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.473911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.473955 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.473966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.473984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.473994 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.507302 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.550274 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.577257 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.577309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.577323 4780 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.577342 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.577354 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.592811 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z 
is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.626960 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.668741 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.679821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.679925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.679945 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.679972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.679989 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.709809 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.750216 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.783496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.783586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.783604 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.783635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.783656 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.791862 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:29Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.886410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.886459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.886469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.886489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.886504 4780 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.988728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.988769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.988783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.988811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:29 crc kubenswrapper[4780]: I1205 06:46:29.988824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:29Z","lastTransitionTime":"2025-12-05T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.091068 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.091113 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.091142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.091161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.091172 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.137839 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:30 crc kubenswrapper[4780]: E1205 06:46:30.137979 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.193599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.193653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.193667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.193688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.193702 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.289339 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7" exitCode=0 Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.289458 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.295402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.295431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.295445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.295463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.295478 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.296227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.320046 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.344928 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.357410 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.373257 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.384103 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399220 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399253 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.399698 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.413893 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.426636 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.436973 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 
06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.475090 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.487709 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.500847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.500902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.500912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.500927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.500936 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.502567 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.514690 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.522796 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:30Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.604756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.604801 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.604809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.604827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.604837 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.706500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.706535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.706550 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.706569 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.706580 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.809160 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.809191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.809200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.809212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.809220 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.911557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.911592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.911602 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.911616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:30 crc kubenswrapper[4780]: I1205 06:46:30.911626 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:30Z","lastTransitionTime":"2025-12-05T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.013961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.014017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.014031 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.014053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.014066 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.117214 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.117257 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.117269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.117285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.117296 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.138535 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.138531 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:31 crc kubenswrapper[4780]: E1205 06:46:31.138915 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:31 crc kubenswrapper[4780]: E1205 06:46:31.139140 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.219594 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.219644 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.219653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.219667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.219679 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.302301 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77" exitCode=0 Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.302357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.313916 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.322489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.322520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.322529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.322543 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.322552 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.325376 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.343103 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.354732 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webho
ok\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.365221 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.376171 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.390708 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.405530 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.415085 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.425142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.425295 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.425392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.425489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.425584 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.426539 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.438138 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.449892 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.461961 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.475028 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:31Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.528485 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.528515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.528523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.528536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.528545 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.631052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.631111 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.631124 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.631138 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.631149 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.733787 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.733830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.733841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.733856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.733867 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.836333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.836377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.836386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.836403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.836412 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.938633 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.938667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.938675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.938688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:31 crc kubenswrapper[4780]: I1205 06:46:31.938698 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:31Z","lastTransitionTime":"2025-12-05T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.041388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.041422 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.041430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.041443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.041451 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.139228 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.139326 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.143153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.143217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.143228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.143238 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.143246 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.245708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.245739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.245747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.245773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.245783 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.310246 4780 generic.go:334] "Generic (PLEG): container finished" podID="cce73093-dc28-44a3-b6b6-e153e0f4d1ff" containerID="a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313" exitCode=0 Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.310291 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerDied","Data":"a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.329540 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348744 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.348913 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.361034 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.379912 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.391294 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.402371 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.416621 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.426463 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.438736 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.450609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.450640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.450648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.450680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.450690 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.452890 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.463933 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.477892 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.498029 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.511730 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:32Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.552586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.552613 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.552624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.552642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.552653 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.655533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.655584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.655595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.655609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.655619 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.759253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.759325 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.759338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.759357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.759370 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.862264 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.862377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.862402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.862435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.862456 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.955672 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.955948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.956001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.956040 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.956126 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.956172 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.956170 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.956186 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.956124 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.956041125 +0000 UTC m=+35.025557497 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.956944 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.957342 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.957388 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.957091494 +0000 UTC m=+35.026607846 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.957537 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.957411343 +0000 UTC m=+35.026927675 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.957739 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.957547558 +0000 UTC m=+35.027063890 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.960219 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.960285 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.960310 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 06:46:32 crc kubenswrapper[4780]: E1205 06:46:32.960428 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.960387147 +0000 UTC m=+35.029903489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.965765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.965815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.965825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.965841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:32 crc kubenswrapper[4780]: I1205 06:46:32.965851 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:32Z","lastTransitionTime":"2025-12-05T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
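Diagnostic sketch (annotation, not journal output): the UnmountVolume.TearDown error above means kubelet currently has no registration for the named CSI driver; the "not registered" configmap/secret errors are the kubelet's object cache not yet synced for those namespaces. A hedged way to check the driver side, assuming node shell access and a logged-in oc client:

    # Cluster-scoped driver object named in the error
    oc get csidriver kubevirt.io.hostpath-provisioner
    # Node-level sockets kubelet discovers CSI plugins through
    ls /var/lib/kubelet/plugins_registry/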
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.068140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.068451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.068566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.068663 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.068744 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.138173 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 06:46:33 crc kubenswrapper[4780]: E1205 06:46:33.138298 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.138322 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 06:46:33 crc kubenswrapper[4780]: E1205 06:46:33.138473 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.213841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.213889 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.213899 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.213913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.213924 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.315498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.315539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.315548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.315564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.315573 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.317509 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" event={"ID":"cce73093-dc28-44a3-b6b6-e153e0f4d1ff","Type":"ContainerStarted","Data":"686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.321642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.322171 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.322195 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.334707 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z"
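Diagnostic sketch (annotation, not journal output): the patch failure above is the recurring signature in this capture — every status update dies calling webhook pod.network-node-identity.openshift.io because its serving certificate expired 2025-08-24 while the node clock reads 2025-12-05. A hedged way to confirm which certificate the endpoint presents, assuming openssl on the node (127.0.0.1:9743 is taken from the Post URL in the error):

    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -enddate
    # expected: notAfter=Aug 24 17:21:41 2025 GMT, matching the x509 error above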
06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.342230 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.345649 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.351499 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.364228 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.377093 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.388769 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.400045 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
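Aggregation sketch (annotation, not journal output): every workload pod on the node hits the same expired webhook, so the patch failures recur once per pod per sync loop. A hedged one-liner to see the spread across pods, assuming journalctl access on the node (unit name kubelet as in the systemd start line of this capture):

    journalctl -u kubelet --since "2025-12-05 06:46:32" --until "2025-12-05 06:46:34" \
      | grep 'Failed to update status for pod' \
      | grep -o 'pod="[^"]*"' | sort | uniq -c | sort -rn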
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.412174 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.417463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.417512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.417525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.417548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.417560 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.429779 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.441660 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z"
Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.452028 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.462855 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.477798 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.491378 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.500967 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 
06:46:33.512362 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.520850 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.521055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.521460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.521558 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.521641 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.522213 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.536448 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.557296 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.567767 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.590002 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.621789 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.623698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.623728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.623738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.623754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.623764 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.643070 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.652486 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.665625 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.676042 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.687555 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.697600 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.708318 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:33Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.725997 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.726017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.726025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.726037 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.726045 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.828589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.828627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.828637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.828650 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.828659 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
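Every status patch above fails with the same root cause: the kubelet's PATCH triggers the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-05. A minimal Go sketch of confirming the validity window from the node itself, assuming the endpoint from the log is reachable; InsecureSkipVerify is set only so the handshake survives the expired certificate long enough to inspect it.

```go
// chk_cert.go - print the webhook serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the log; only reachable on the node itself.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// The condition behind the kubelet error: "x509: certificate has
		// expired or is not yet valid: current time ... is after ..."
		fmt.Println("certificate has expired")
	}
}
```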
Has your network provider started?"} Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.930611 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.930646 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.930654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.930667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:33 crc kubenswrapper[4780]: I1205 06:46:33.930677 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:33Z","lastTransitionTime":"2025-12-05T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.032952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.032996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.033005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.033021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.033029 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.135047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.135080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.135091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.135106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.135117 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.140130 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.140226 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.238156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.238459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.238601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.238736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.238861 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.323985 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.341349 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.341420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.341431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.341444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.341453 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.443461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.443813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.443903 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.443967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.444030 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.448391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.448435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.448445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.448459 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.448469 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.460596 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:34Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.463711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.463750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.463761 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.463775 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.463784 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.475426 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:34Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.479117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.479241 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.479481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.479707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.479807 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.491012 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:34Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.494778 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.494808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.494817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.494829 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.494839 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.505971 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:34Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.511264 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.511544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.511572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.511759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.511810 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.535956 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:34Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:34 crc kubenswrapper[4780]: E1205 06:46:34.536441 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.546845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.547060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.547122 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.547182 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.547235 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.649063 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.649114 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.649129 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.649148 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.649160 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.751178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.751473 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.751538 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.751618 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.751735 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.854082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.854128 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.854138 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.854155 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.854166 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.956730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.956765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.956773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.956786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:34 crc kubenswrapper[4780]: I1205 06:46:34.956795 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:34Z","lastTransitionTime":"2025-12-05T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.059164 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.059229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.059239 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.059302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.059313 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.138727 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.138766 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:35 crc kubenswrapper[4780]: E1205 06:46:35.139082 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:35 crc kubenswrapper[4780]: E1205 06:46:35.139259 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.161431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.161454 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.161488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.161502 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.161511 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.263234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.263278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.263287 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.263303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.263314 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.330903 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/0.log" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.333444 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6" exitCode=1 Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.333474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.334148 4780 scope.go:117] "RemoveContainer" containerID="a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.347167 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.360510 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.368439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.368480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.368489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.368504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.368514 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.375964 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc
721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.393418 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.410022 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.424733 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.437100 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.458446 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:35Z\\\",\\\"message\\\":\\\"tworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.613753 6064 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.613800 6064 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.614174 6064 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.614693 6064 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 06:46:34.614718 6064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 06:46:34.614731 6064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 06:46:34.614736 6064 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 06:46:34.614770 6064 factory.go:656] Stopping watch factory\\\\nI1205 06:46:34.614784 6064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 06:46:34.614796 6064 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.470686 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.470742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.470753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.470768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.470805 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.474903 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.486133 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.497640 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.511726 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.524855 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.537091 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:35Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 
06:46:35.573292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.573338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.573349 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.573375 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.573387 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.676305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.676345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.676361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.676384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.676396 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.779172 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.779221 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.779238 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.779286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.779301 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.881343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.881393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.881403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.881420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.881432 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.985767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.985819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.985830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.985847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:35 crc kubenswrapper[4780]: I1205 06:46:35.985858 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:35Z","lastTransitionTime":"2025-12-05T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.088777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.088818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.088830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.088847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.088859 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.138130 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:36 crc kubenswrapper[4780]: E1205 06:46:36.138264 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.152610 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.177173 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib
\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:35Z\\\",\\\"message\\\":\\\"tworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.613753 6064 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.613800 6064 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.614174 6064 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.614693 6064 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 06:46:34.614718 6064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 06:46:34.614731 6064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 06:46:34.614736 6064 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 06:46:34.614770 6064 factory.go:656] 
Stopping watch factory\\\\nI1205 06:46:34.614784 6064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 06:46:34.614796 6064 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contain
erID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.187520 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.191657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.191702 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.191717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.191739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.191754 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.203551 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.219509 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.234639 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.246104 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 
06:46:36.260432 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.273961 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.287361 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.294055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.294116 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.294140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.294168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.294190 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.298511 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc
721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.313586 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.328803 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.338539 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/0.log" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.340592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.340739 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.346308 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.362066 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.373496 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.390337 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.396102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.396311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.396434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.396554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.396661 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.420106 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56
a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:35Z\\\",\\\"message\\\":\\\"tworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.613753 6064 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.613800 6064 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.614174 6064 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.614693 6064 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 06:46:34.614718 6064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 06:46:34.614731 6064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 06:46:34.614736 6064 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 06:46:34.614770 6064 factory.go:656] Stopping watch factory\\\\nI1205 06:46:34.614784 6064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 06:46:34.614796 6064 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.436938 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.457111 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.472697 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.487349 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.499211 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.499240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.499249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.499263 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.499272 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.506002 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.522175 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.538175 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.553651 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.566522 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.575935 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.601952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.602099 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.602179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.602293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.602395 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.708434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.708472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.708484 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.708501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.708512 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.811511 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.811547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.811556 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.811571 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.811582 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.914490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.914547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.914562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.914583 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:36 crc kubenswrapper[4780]: I1205 06:46:36.914600 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:36Z","lastTransitionTime":"2025-12-05T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.017687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.017771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.017908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.017948 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.017972 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.120537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.120608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.120620 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.120640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.120655 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.138117 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.138234 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:37 crc kubenswrapper[4780]: E1205 06:46:37.138371 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:37 crc kubenswrapper[4780]: E1205 06:46:37.138497 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
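
The entries above all trace back to one condition: the container runtime reports NetworkReady=false because no CNI network configuration exists yet, so the kubelet holds the node's Ready condition at False and re-records the same node events on every status sync. The test the message describes is simple to reproduce; the sketch below is a minimal stand-in (not kubelet's or libcni's actual code), assuming only the directory path quoted in the message. On this node the CNI provider is OVN-Kubernetes, which normally writes that config once ovnkube-controller is up, and the later entries show why it never does.

    // cniready: minimal stand-in for the readiness test implied by
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path quoted in the kubelet message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", dir, err)
            os.Exit(1)
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni loads
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
                return // a network provider has written its config
            }
        }
        fmt.Println("no CNI configuration file; node will stay NotReady")
        os.Exit(1)
    }
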
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.223999 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.224035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.224046 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.224059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.224068 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.326766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.326826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.326843 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.326866 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.326921 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.346376 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/1.log" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.347152 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/0.log" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.351170 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1" exitCode=1 Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.351241 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.351337 4780 scope.go:117] "RemoveContainer" containerID="a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.351821 4780 scope.go:117] "RemoveContainer" containerID="033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1" Dec 05 06:46:37 crc kubenswrapper[4780]: E1205 06:46:37.352005 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.371397 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.387974 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.409273 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.429625 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
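
Each of these "Failed to update status for pod" entries fails the same way: the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node clock (2025-12-05), so every pod status patch that must pass through the webhook is rejected before it reaches the API object. The validity window is easy to confirm from the node. The sketch below dials the address taken from the error text and deliberately skips verification so the expired certificate can still be read; this is for inspection only, never for real traffic. Go's verifier emits exactly the message quoted in the log when the current time falls outside [NotBefore, NotAfter].

    // certwindow: print the validity window of the certificate served at the
    // webhook endpoint named in the kubelet errors above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // endpoint from the "failed to call webhook" errors
        conf := &tls.Config{InsecureSkipVerify: true} // handshake must complete to read the cert
        conn, err := tls.Dial("tcp", addr, conf)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        if time.Now().UTC().After(cert.NotAfter) {
            fmt.Println("expired: matches the x509 'certificate has expired' error above")
        }
    }
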
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:35Z\\\",\\\"message\\\":\\\"tworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.613753 6064 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.613800 6064 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.614174 6064 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.614693 6064 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 06:46:34.614718 6064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 06:46:34.614731 6064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 06:46:34.614736 6064 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 06:46:34.614770 6064 factory.go:656] Stopping watch factory\\\\nI1205 06:46:34.614784 6064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 06:46:34.614796 6064 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.431428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.431472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.431482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.431498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.431507 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.440528 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.451919 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.466234 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.479988 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.494733 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.512111 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.527468 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.535249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.535289 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.535298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.535311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.535322 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.542349 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.555162 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.564558 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.638047 4780 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.638095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.638105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.638120 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.638129 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.740107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.740149 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.740159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.740174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.740184 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.824208 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql"] Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.824946 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.827264 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.828213 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.843843 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.844099 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.844145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.844154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.844167 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.844175 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.856185 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.867498 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.878588 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.889156 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.898698 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.908429 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8986ad8-ac4a-499b-bf48-363f358c1876-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.908519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhcqr\" (UniqueName: \"kubernetes.io/projected/f8986ad8-ac4a-499b-bf48-363f358c1876-kube-api-access-dhcqr\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.908557 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.908580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.912766 4780 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.930339 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0477145d5f479bb4ee53211d7d7360f8c6107fbcd6830ef44730e23cbe35ab6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:35Z\\\",\\\"message\\\":\\\"tworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.613753 6064 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.613800 6064 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1205 06:46:34.614174 6064 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 06:46:34.614693 6064 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 06:46:34.614718 6064 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 06:46:34.614731 6064 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 06:46:34.614736 6064 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 06:46:34.614770 6064 factory.go:656] Stopping watch factory\\\\nI1205 06:46:34.614784 6064 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 06:46:34.614796 6064 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.940753 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.946112 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.946181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.946196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.946212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.946225 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:37Z","lastTransitionTime":"2025-12-05T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.950771 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.960480 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.970640 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.981041 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:37 crc kubenswrapper[4780]: I1205 06:46:37.993780 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:37Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.002954 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 
06:46:38.009797 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8986ad8-ac4a-499b-bf48-363f358c1876-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.009893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhcqr\" (UniqueName: \"kubernetes.io/projected/f8986ad8-ac4a-499b-bf48-363f358c1876-kube-api-access-dhcqr\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.009925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.009951 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.010799 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.011074 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8986ad8-ac4a-499b-bf48-363f358c1876-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.016504 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8986ad8-ac4a-499b-bf48-363f358c1876-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.023923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhcqr\" (UniqueName: \"kubernetes.io/projected/f8986ad8-ac4a-499b-bf48-363f358c1876-kube-api-access-dhcqr\") pod \"ovnkube-control-plane-749d76644c-tdkql\" (UID: \"f8986ad8-ac4a-499b-bf48-363f358c1876\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.048013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.048043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.048051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.048081 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.048089 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.137051 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.138180 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:38 crc kubenswrapper[4780]: E1205 06:46:38.138324 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.151456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.151603 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.151694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.151788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.151874 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: W1205 06:46:38.157369 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8986ad8_ac4a_499b_bf48_363f358c1876.slice/crio-51be36bfd4805d59695d164acf1f481327bafc44f8850e657b77ceb7f1d18556 WatchSource:0}: Error finding container 51be36bfd4805d59695d164acf1f481327bafc44f8850e657b77ceb7f1d18556: Status 404 returned error can't find the container with id 51be36bfd4805d59695d164acf1f481327bafc44f8850e657b77ceb7f1d18556 Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.255082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.255383 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.255519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.255658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.255852 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.356518 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" event={"ID":"f8986ad8-ac4a-499b-bf48-363f358c1876","Type":"ContainerStarted","Data":"51be36bfd4805d59695d164acf1f481327bafc44f8850e657b77ceb7f1d18556"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.359456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.359743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.359809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.359834 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.359909 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.360104 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/1.log" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.366373 4780 scope.go:117] "RemoveContainer" containerID="033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1" Dec 05 06:46:38 crc kubenswrapper[4780]: E1205 06:46:38.367655 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.385589 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.402039 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.418086 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.442447 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.461826 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc27
6e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.464863 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.464938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.464951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.464969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.464981 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.476308 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.486224 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.502617 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.512851 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.525452 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.535199 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.545737 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.557104 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.567266 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.567302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.567313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.567330 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.567342 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.575496 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.583784 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.590489 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.603831 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.616832 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.639775 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.652798 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 
06:46:38.667458 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.669059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.669122 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.669136 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.669155 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.669168 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.680323 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.693023 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.706686 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.719531 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.733260 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.743222 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.754536 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.764154 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.771680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.771716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.771724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.771738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.771747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.775631 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.791942 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:38Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.874972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.875012 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.875020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.875035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.875044 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.976856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.976905 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.976913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.976927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:38 crc kubenswrapper[4780]: I1205 06:46:38.976936 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:38Z","lastTransitionTime":"2025-12-05T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.078796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.078827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.078835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.078848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.078856 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.137903 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.137921 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:39 crc kubenswrapper[4780]: E1205 06:46:39.138150 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:39 crc kubenswrapper[4780]: E1205 06:46:39.138258 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.180570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.180601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.180609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.180621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.180630 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.282850 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.282941 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.282960 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.282983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.283001 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.290051 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-zkjck"] Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.290667 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: E1205 06:46:39.290754 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.304519 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.314252 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.325631 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.340065 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.355586 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.368182 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.369412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" event={"ID":"f8986ad8-ac4a-499b-bf48-363f358c1876","Type":"ContainerStarted","Data":"ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.369443 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" event={"ID":"f8986ad8-ac4a-499b-bf48-363f358c1876","Type":"ContainerStarted","Data":"727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.379697 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.384420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.384449 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.384458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.384469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.384477 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.390929 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.402463 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.415062 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.424975 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjk8q\" (UniqueName: \"kubernetes.io/projected/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-kube-api-access-bjk8q\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.425015 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.426316 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.438731 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.451321 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.465052 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.480099 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.491038 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.492420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.492533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.492753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.492836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.493071 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.505001 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.524805 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56
a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.525686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: E1205 06:46:39.525835 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:39 crc kubenswrapper[4780]: E1205 06:46:39.525925 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:46:40.025907526 +0000 UTC m=+34.095423858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.525854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjk8q\" (UniqueName: \"kubernetes.io/projected/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-kube-api-access-bjk8q\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.537770 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config
\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.542260 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjk8q\" (UniqueName: \"kubernetes.io/projected/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-kube-api-access-bjk8q\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.549440 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.561372 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.578918 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.595497 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.595552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.595568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.595598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.595611 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.596819 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.605533 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.616270 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.629977 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.646675 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.655402 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.666589 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.680152 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.689211 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.697317 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.697345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.697352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.697365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.697374 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.700317 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:39Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.799330 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.799585 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.799692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.799783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.799873 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.902309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.902347 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.902358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.902373 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:39 crc kubenswrapper[4780]: I1205 06:46:39.902384 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:39Z","lastTransitionTime":"2025-12-05T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.005250 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.005493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.005576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.005656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.005740 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.030784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:40 crc kubenswrapper[4780]: E1205 06:46:40.030947 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:40 crc kubenswrapper[4780]: E1205 06:46:40.030999 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:46:41.030982252 +0000 UTC m=+35.100498594 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.108562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.109041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.109205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.109362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.109511 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.138164 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:40 crc kubenswrapper[4780]: E1205 06:46:40.138284 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.211829 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.212304 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.212424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.212591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.212747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.315777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.315819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.315831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.315846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.315860 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.418249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.418753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.418864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.418961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.419027 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.520758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.521258 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.521363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.521447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.521518 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.623615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.623653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.623665 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.623683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.623695 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.726780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.727065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.727125 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.727188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.727247 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.830314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.830397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.830420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.830450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.830474 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.932276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.932355 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.932381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.932412 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:40 crc kubenswrapper[4780]: I1205 06:46:40.932438 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:40Z","lastTransitionTime":"2025-12-05T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.035244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.035326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.035345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.035367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.035380 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.042707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.042814 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:46:57.042796173 +0000 UTC m=+51.112312505 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.042873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.042926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.042968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.042995 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.042972 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.043020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043032 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043042 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043070 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:57.04306286 +0000 UTC m=+51.112579182 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043082 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043012 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043176 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043121 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:57.043108352 +0000 UTC m=+51.112624694 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043223 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:57.043211114 +0000 UTC m=+51.112727466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043239 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:46:43.043230925 +0000 UTC m=+37.112747267 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043245 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043287 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043309 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.043402 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:46:57.04337565 +0000 UTC m=+51.112892022 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138040 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.138150 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138197 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138235 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138039 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.138420 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.138978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:41 crc kubenswrapper[4780]: E1205 06:46:41.139298 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.240706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.241204 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.241388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.241571 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.241741 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.344693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.345192 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.345358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.345506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.345650 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.448612 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.448667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.448678 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.448702 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.448714 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.552168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.552217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.552231 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.552249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.552264 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.654864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.655020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.655042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.655073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.655096 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.758133 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.758177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.758189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.758205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.758216 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.860579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.860674 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.860683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.860697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.860707 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.962494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.962535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.962545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.962561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:41 crc kubenswrapper[4780]: I1205 06:46:41.962578 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:41Z","lastTransitionTime":"2025-12-05T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.064485 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.064521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.064529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.064542 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.064550 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.138170 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:42 crc kubenswrapper[4780]: E1205 06:46:42.138345 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.166436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.166499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.166521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.166548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.166570 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.269209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.269268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.269282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.269299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.269310 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.371108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.371135 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.371143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.371154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.371162 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.473649 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.473693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.473701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.473714 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.473724 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.576143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.576177 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.576185 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.576202 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.576212 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.678868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.678926 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.678934 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.678947 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.678958 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.780813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.780847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.780856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.780870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.780902 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.883357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.883420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.883438 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.883464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.883480 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.987045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.987145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.987162 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.987186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:42 crc kubenswrapper[4780]: I1205 06:46:42.987203 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:42Z","lastTransitionTime":"2025-12-05T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.071668 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:43 crc kubenswrapper[4780]: E1205 06:46:43.071875 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:43 crc kubenswrapper[4780]: E1205 06:46:43.072044 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:46:47.071995509 +0000 UTC m=+41.141511881 (durationBeforeRetry 4s). 
[... the status block repeats at 06:46:43.089 ...]
Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.138370 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck"
Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.138504 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 06:46:43 crc kubenswrapper[4780]: E1205 06:46:43.138553 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be"
Dec 05 06:46:43 crc kubenswrapper[4780]: E1205 06:46:43.138712 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.138826 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 06:46:43 crc kubenswrapper[4780]: E1205 06:46:43.138986 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.191786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.191843 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.191866 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.191910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.191926 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.294657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.294711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.294726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.294747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.294759 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.397953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.398023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.398049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.398082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.398109 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.500802 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.500925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.500953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.500986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.501010 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.604329 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.604377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.604386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.604401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.604411 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.707514 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.707547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.707559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.707574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.707582 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.810292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.810356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.810368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.810385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.810396 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.912660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.912704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.912713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.912727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:43 crc kubenswrapper[4780]: I1205 06:46:43.912737 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:43Z","lastTransitionTime":"2025-12-05T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.015027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.015061 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.015069 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.015083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.015092 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.117623 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.117667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.117680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.117696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.117708 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.138633 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.138784 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.220582 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.220615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.220623 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.220634 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.220642 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.323971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.324060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.324089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.324126 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.324154 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.433986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.434036 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.434047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.434065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.434076 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.537392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.537492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.537516 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.537551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.537580 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.640288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.640326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.640333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.640345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.640354 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.663641 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.663687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.663698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.663717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.663730 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.678976 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:44Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.683080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.683105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.683132 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.683146 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.683156 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.697503 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:44Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.701698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.701737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.701751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.701768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.701782 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.718695 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:44Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.722250 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.722285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.722297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.722313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.722324 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.736278 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:44Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.739670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.739712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.739723 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.739738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.739748 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.754428 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:44Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:44 crc kubenswrapper[4780]: E1205 06:46:44.754543 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.755758 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.755786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.755796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.755808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.755818 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.858392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.858460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.858478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.858503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.858538 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.961461 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.961515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.961531 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.961555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:44 crc kubenswrapper[4780]: I1205 06:46:44.961578 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:44Z","lastTransitionTime":"2025-12-05T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.064068 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.064115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.064125 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.064139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.064148 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.138333 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.138365 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.138333 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:45 crc kubenswrapper[4780]: E1205 06:46:45.138519 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:45 crc kubenswrapper[4780]: E1205 06:46:45.138622 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:45 crc kubenswrapper[4780]: E1205 06:46:45.138699 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.167492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.167554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.167563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.167576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.167584 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.270908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.270981 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.270998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.271022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.271041 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.373261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.373301 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.373309 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.373323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.373332 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.475712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.475770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.475782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.475800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.475811 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.577827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.577864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.577873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.577907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.577918 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.680239 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.680273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.680282 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.680296 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.680304 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.782544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.782582 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.782591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.782606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.782614 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.885163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.885212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.885223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.885427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.885439 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.987907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.987941 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.987952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.987968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:45 crc kubenswrapper[4780]: I1205 06:46:45.987980 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:45Z","lastTransitionTime":"2025-12-05T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.090324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.090399 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.090410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.090424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.090435 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.138085 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:46 crc kubenswrapper[4780]: E1205 06:46:46.138210 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.154492 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.166731 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.180955 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.192297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.192354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.192370 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.192390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.192406 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.195799 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.213032 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.224620 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.235363 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.245945 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.258181 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.280916 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.294731 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.295152 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.295193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.295207 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.295226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.295237 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.306037 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.315246 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.334305 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.346511 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.358295 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:46Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.397255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.397291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.397305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.397321 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.397330 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.499229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.499440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.499448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.499462 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.499477 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.604911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.604966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.604979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.604996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.605009 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.707438 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.707501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.707519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.707541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.707557 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.809526 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.809574 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.809597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.809625 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.809646 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.911957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.911997 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.912006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.912020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:46 crc kubenswrapper[4780]: I1205 06:46:46.912029 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:46Z","lastTransitionTime":"2025-12-05T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.014262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.014297 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.014306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.014320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.014329 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.111227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:47 crc kubenswrapper[4780]: E1205 06:46:47.111353 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:47 crc kubenswrapper[4780]: E1205 06:46:47.111404 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:46:55.111388944 +0000 UTC m=+49.180905276 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.117472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.117493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.117502 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.117515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.117524 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.138223 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.138278 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.138362 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:47 crc kubenswrapper[4780]: E1205 06:46:47.138422 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:47 crc kubenswrapper[4780]: E1205 06:46:47.138591 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:47 crc kubenswrapper[4780]: E1205 06:46:47.138718 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.221062 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.221126 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.221144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.221172 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.221192 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.324075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.324129 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.324140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.324158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.324170 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.428281 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.428340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.428357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.428383 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.428400 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.532448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.532520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.532539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.532567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.532587 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.635833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.635956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.635980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.636013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.636035 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.738902 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.738946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.738956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.738970 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.738980 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.841246 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.841277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.841286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.841298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.841307 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.943408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.943457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.943469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.943486 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:47 crc kubenswrapper[4780]: I1205 06:46:47.943497 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:47Z","lastTransitionTime":"2025-12-05T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.045117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.045158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.045166 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.045179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.045188 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.138742 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:48 crc kubenswrapper[4780]: E1205 06:46:48.138908 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.147846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.147926 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.147944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.147965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.147984 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.251340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.251391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.251407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.251432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.251450 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.354654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.354711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.354728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.354752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.354769 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.457361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.457415 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.457426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.457442 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.457455 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.559653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.559747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.559771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.559874 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.559936 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.662380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.662406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.662416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.662428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.662438 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.765419 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.765455 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.765465 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.765479 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.765491 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.867848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.867991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.868023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.868053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.868076 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.971189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.971243 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.971255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.971273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:48 crc kubenswrapper[4780]: I1205 06:46:48.971285 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:48Z","lastTransitionTime":"2025-12-05T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.074558 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.074601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.074616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.074636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.074648 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.138274 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:49 crc kubenswrapper[4780]: E1205 06:46:49.138409 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.138273 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.138870 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:49 crc kubenswrapper[4780]: E1205 06:46:49.139060 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:49 crc kubenswrapper[4780]: E1205 06:46:49.139267 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.177146 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.177205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.177223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.178113 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.178153 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.281495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.281544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.281555 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.281573 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.281584 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.384684 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.384726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.384735 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.384748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.384757 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.487024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.487061 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.487070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.487083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.487092 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.590005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.590052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.590064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.590082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.590093 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.693697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.693751 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.693765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.693783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.693799 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.796825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.796867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.796876 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.796904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.796913 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.899570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.899606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.899615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.899629 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:49 crc kubenswrapper[4780]: I1205 06:46:49.899637 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:49Z","lastTransitionTime":"2025-12-05T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.003357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.003436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.003448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.003466 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.003477 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.106224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.106279 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.106290 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.106307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.106320 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.138439 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:50 crc kubenswrapper[4780]: E1205 06:46:50.138620 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.209033 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.209127 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.209141 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.209183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.209198 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.311742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.311782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.311793 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.311810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.311862 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.415199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.415278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.415303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.415336 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.415361 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.518175 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.518211 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.518222 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.518236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.518247 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.621324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.621359 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.621369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.621384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.621395 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.723973 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.724588 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.724685 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.724776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.724905 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.827924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.827973 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.827986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.828004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.828020 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.930839 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.931145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.931207 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.931274 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:50 crc kubenswrapper[4780]: I1205 06:46:50.931334 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:50Z","lastTransitionTime":"2025-12-05T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.034077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.034133 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.034150 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.034174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.034191 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137722 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137724 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137903 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137927 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: E1205 06:46:51.137952 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:51 crc kubenswrapper[4780]: E1205 06:46:51.137857 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.137756 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:51 crc kubenswrapper[4780]: E1205 06:46:51.138016 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.241047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.241314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.241416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.241509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.241593 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.343320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.343368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.343378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.343391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.343399 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.445609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.445679 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.445698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.445725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.445742 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.547977 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.548071 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.548083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.548107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.548121 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.650648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.650694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.650705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.650735 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.650745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.753206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.753248 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.753257 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.753273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.753283 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.857293 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.857343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.857357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.857372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.857383 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.959745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.959779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.959788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.959802 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:51 crc kubenswrapper[4780]: I1205 06:46:51.959811 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:51Z","lastTransitionTime":"2025-12-05T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.062233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.062269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.062277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.062291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.062299 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.138598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:52 crc kubenswrapper[4780]: E1205 06:46:52.138754 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.139306 4780 scope.go:117] "RemoveContainer" containerID="033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.164731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.164788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.164811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.164838 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.164860 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.267731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.267763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.267782 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.267798 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.267810 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.370289 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.370353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.370377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.370404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.370425 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.407301 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/1.log" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.409475 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.409609 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.428265 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.446509 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.457948 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 
06:46:52.472144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.472186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.472198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.472214 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.472226 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.472665 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.485930 4780 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.508850 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.523897 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.545439 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.558027 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.575336 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.575385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.575398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.575415 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.575426 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.580156 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.644523 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.662343 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.671578 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.676860 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.676913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.676922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.676937 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.676946 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.682719 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.700604 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da8
62d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.711760 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:52Z is after 2025-08-24T17:21:41Z" Dec 05 
06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.778727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.778770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.778780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.778795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.778806 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.881406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.881458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.881467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.881486 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.881497 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.983464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.983537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.983559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.983590 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:52 crc kubenswrapper[4780]: I1205 06:46:52.983667 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:52Z","lastTransitionTime":"2025-12-05T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.086688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.086729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.086740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.086756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.086768 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.138148 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.138208 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.138237 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:53 crc kubenswrapper[4780]: E1205 06:46:53.138254 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:53 crc kubenswrapper[4780]: E1205 06:46:53.138334 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:53 crc kubenswrapper[4780]: E1205 06:46:53.138429 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.189168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.189236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.189245 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.189260 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.189272 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.291914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.291958 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.291968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.291989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.292006 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.394402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.394443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.394453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.394468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.394481 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.412459 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/2.log" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.412916 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/1.log" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.414940 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed" exitCode=1 Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.414970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.415011 4780 scope.go:117] "RemoveContainer" containerID="033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.415770 4780 scope.go:117] "RemoveContainer" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed" Dec 05 06:46:53 crc kubenswrapper[4780]: E1205 06:46:53.415976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.429355 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.441339 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.454322 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.466007 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.478011 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.491637 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.496638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.496666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.496674 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.496686 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.496696 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.503424 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.517373 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.534564 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.553776 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://033b89962bcf23ef0781642f637e04a9d9d16a56a33cbbcba28f50680e8a44a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:36Z\\\",\\\"message\\\":\\\"Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1205 06:46:36.062648 6196 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/control-plane-machine-set-operator]} name:Service_openshift-machine-api/control-plane-machine-set-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.41:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {589f95f7-f3e2-4140-80ed-9a0717201481}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1205 06:46:36.062673 6196 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-scs78\\\\nI1205 06:46:36.062676 6196 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nF1205 06:46:36.062554 6196 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 
openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.565243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 
06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.584517 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.594055 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.598479 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.598509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.598520 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.598536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.598549 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.604086 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.614190 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.625687 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:53Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.701400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.701440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.701450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.701464 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.701473 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.804590 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.804626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.804636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.804652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.804663 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.906819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.906867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.906906 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.906951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:53 crc kubenswrapper[4780]: I1205 06:46:53.906965 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:53Z","lastTransitionTime":"2025-12-05T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.009463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.009854 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.010080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.010218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.010374 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.113853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.114234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.114516 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.114750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.114998 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.138269 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:54 crc kubenswrapper[4780]: E1205 06:46:54.138579 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.216980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.217018 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.217027 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.217043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.217052 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.319928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.320360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.320378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.320401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.320418 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.419530 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/2.log" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.421753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.421811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.421826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.421861 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.421872 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.525495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.525598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.525625 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.525660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.525684 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.628968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.629012 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.629022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.629043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.629055 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.730901 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.730946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.730991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.731006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.731015 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.833763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.833815 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.833828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.833845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.833856 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.935944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.936005 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.936026 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.936051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:54 crc kubenswrapper[4780]: I1205 06:46:54.936069 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:54Z","lastTransitionTime":"2025-12-05T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.027660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.027700 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.027709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.027722 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.027734 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.039632 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.045233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.045275 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.045292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.045314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.045327 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.058065 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.061676 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.061729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.061741 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.061759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.061773 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.075773 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.088565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.088621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.088641 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.088664 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.088681 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.108108 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.112215 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.112339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
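The status patch itself is well-formed; it is rejected in transit. Each "Error updating node status, will retry" entry ends with the real failure: the API server cannot call the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, long before the current clock of 2025-12-05. One way to confirm the validity window independently (a hypothetical Go diagnostic sketch, not OpenShift tooling; the address comes from the log line) is to complete a TLS handshake with verification disabled and inspect the leaf certificate:

    // certcheck.go - hypothetical diagnostic: fetch the webhook's serving
    // certificate and compare its validity window with the current time,
    // mirroring the x509 "certificate has expired" error in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the log line
        conn, err := tls.Dial("tcp", addr, &tls.Config{
            InsecureSkipVerify: true, // accept the expired cert so we can read it
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Printf("notBefore=%s notAfter=%s\n",
            leaf.NotBefore.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
        if now.After(leaf.NotAfter) {
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
        }
    }

Because the failure is on the webhook's side, the kubelet's retries (one follows below) hit the identical error until the certificate is rotated; a node whose clock jumped months ahead, as is likely for this crc VM, will stay in this loop indefinitely.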
event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.112425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.112507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.112631 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.125100 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.125212 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.126839 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.126961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.127024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.127106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.127185 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.138756 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.138823 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.138856 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.138762 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.139027 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.139211 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.193347 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.193506 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.193568 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:47:11.193549657 +0000 UTC m=+65.263065989 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.229953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.229986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.229997 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.230013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.230024 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.333471 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.333512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.333526 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.333549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.333565 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.436100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.436146 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.436156 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.436171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.436182 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.538777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.538825 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.538836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.538855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.538868 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.574293 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.575742 4780 scope.go:117] "RemoveContainer" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed" Dec 05 06:46:55 crc kubenswrapper[4780]: E1205 06:46:55.576049 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.588749 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.606115 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da8
62d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.618009 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.632267 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.641611 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.641657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.641672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.641690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.641704 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.648458 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.661427 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.677888 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.689485 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.704134 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478
274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.716915 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.730150 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.742158 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.743786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.743831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.743845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.743862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.743874 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.756394 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.767483 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.782071 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.794276 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:55Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.847494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.847555 
4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.847570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.847589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.847602 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.950972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.951035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.951050 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.951076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:55 crc kubenswrapper[4780]: I1205 06:46:55.951091 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:55Z","lastTransitionTime":"2025-12-05T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.054411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.055098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.055142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.055166 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.055178 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.138839 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:56 crc kubenswrapper[4780]: E1205 06:46:56.139159 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.157983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.158055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.158072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.158097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.158118 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.162694 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.180296 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.202924 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.217730 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 
06:46:56.234360 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.257272 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.261717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.261765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.261775 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.261796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.261806 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.275391 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.298008 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.316399 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.334193 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.346497 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.362142 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.365256 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.365306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.365320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.365346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.365366 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.380655 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.396619 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.414537 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.447470 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da8
62d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:46:56Z is after 2025-08-24T17:21:41Z" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.468506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.468551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.468561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.468578 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.468592 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.570595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.570665 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.570681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.570708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.570726 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.673397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.673443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.673453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.673504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.673519 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.776842 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.776968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.776988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.777021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.777041 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.881333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.881436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.881466 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.881503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.881533 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.984791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.984871 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.984896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.984965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:56 crc kubenswrapper[4780]: I1205 06:46:56.984988 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:56Z","lastTransitionTime":"2025-12-05T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.087432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.087512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.087532 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.087563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.087583 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.115771 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.115849 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.115882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.115927 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.115953 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116056 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116099 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:47:29.116086436 +0000 UTC m=+83.185602768 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116305 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 06:47:29.116297612 +0000 UTC m=+83.185813944 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116361 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116373 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116384 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116406 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:47:29.116399504 +0000 UTC m=+83.185915836 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116722 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116746 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:47:29.116739334 +0000 UTC m=+83.186255666 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116800 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116853 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116871 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.116975 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:47:29.116946419 +0000 UTC m=+83.186462751 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.138532 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.138629 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.138680 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.138721 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.138768 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:57 crc kubenswrapper[4780]: E1205 06:46:57.138807 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.191052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.191119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.191130 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.191154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.191170 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.294356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.294500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.294530 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.294568 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.294594 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.397789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.397841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.397854 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.397868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.397880 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.501813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.501872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.501915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.501940 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.501956 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.605769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.605832 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.605846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.605873 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.605915 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.709510 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.709567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.709576 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.709600 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.709618 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.812421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.812512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.812537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.812572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.812599 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.915486 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.915539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.915559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.915581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:57 crc kubenswrapper[4780]: I1205 06:46:57.915600 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:57Z","lastTransitionTime":"2025-12-05T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.020009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.020079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.020108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.020145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.020169 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.124161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.124264 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.124289 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.124324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.124348 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.138387 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:46:58 crc kubenswrapper[4780]: E1205 06:46:58.138663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.228626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.228696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.228716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.228745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.228764 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.332007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.332095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.332115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.332148 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.332167 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.435695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.435766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.435786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.435828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.435866 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.540086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.540139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.540152 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.540175 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.540189 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.646616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.646675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.646692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.646713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.646727 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.750338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.750416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.750435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.750468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.750491 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.853653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.853706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.853725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.853754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.853776 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.956710 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.956766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.956786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.956813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:58 crc kubenswrapper[4780]: I1205 06:46:58.956836 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:58Z","lastTransitionTime":"2025-12-05T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.060360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.060428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.060449 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.060478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.060499 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.137863 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.137983 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.137922 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:46:59 crc kubenswrapper[4780]: E1205 06:46:59.138157 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:46:59 crc kubenswrapper[4780]: E1205 06:46:59.138260 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:46:59 crc kubenswrapper[4780]: E1205 06:46:59.138428 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.164036 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.164108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.164126 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.164151 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.164169 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.268327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.268403 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.268451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.268482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.268503 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.372139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.372213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.372236 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.372345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.372388 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.475840 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.475931 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.475949 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.475971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.475991 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.579734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.579818 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.579840 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.579869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.579922 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.683695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.683756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.683771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.683797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.683813 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.787278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.787346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.787367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.787397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.787420 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.890265 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.890324 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.890333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.890361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.890376 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.994278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.994442 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.994468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.994545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:46:59 crc kubenswrapper[4780]: I1205 06:46:59.994649 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:46:59Z","lastTransitionTime":"2025-12-05T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.098847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.098893 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.098926 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.098944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.098954 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.138946 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:00 crc kubenswrapper[4780]: E1205 06:47:00.139134 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.202450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.202490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.202501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.202517 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.202529 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.306018 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.306108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.306124 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.306147 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.306165 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.409932 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.410015 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.410040 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.410074 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.410099 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.513167 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.513250 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.513274 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.513307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.513329 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.616463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.616554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.616604 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.616636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.616662 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.719509 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.719577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.719595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.719620 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.719638 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.822686 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.822759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.822773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.822797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.822814 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.925484 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.925537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.925546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.925563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:00 crc kubenswrapper[4780]: I1205 06:47:00.925574 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:00Z","lastTransitionTime":"2025-12-05T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.030333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.030401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.030415 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.030436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.030449 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.133476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.133524 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.133535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.133553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.133570 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.138878 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.138951 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.138989 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:01 crc kubenswrapper[4780]: E1205 06:47:01.139229 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:01 crc kubenswrapper[4780]: E1205 06:47:01.139364 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:01 crc kubenswrapper[4780]: E1205 06:47:01.139584 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.236411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.236477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.236501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.236535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.236558 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.319664 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.334930 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340327 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340915 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.340909 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 
06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.365007 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.387264 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.401870 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.416723 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.436729 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.443514 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.443577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.443588 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.443606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.443618 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.453463 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.466097 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.478261 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.489911 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.510569 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.526649 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.544355 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.546286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.546358 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.546378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.546407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.546425 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.557018 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.568192 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.587516 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:01Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.649256 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.649302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.649314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.649333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.649346 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.751261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.751308 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.751320 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.751339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.751354 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.854496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.854556 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.854573 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.854596 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.854614 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.957709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.957752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.957762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.957780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:01 crc kubenswrapper[4780]: I1205 06:47:01.957790 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:01Z","lastTransitionTime":"2025-12-05T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.061079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.061147 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.061169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.061201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.061226 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.138170 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:02 crc kubenswrapper[4780]: E1205 06:47:02.138382 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.163833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.163955 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.163978 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.164006 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.164027 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.267671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.268344 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.268519 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.268710 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.268862 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.374553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.374660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.374683 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.375552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.375588 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.479418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.479500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.479521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.479548 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.479571 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.583425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.584189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.584344 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.584502 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.584644 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.687779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.688330 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.688484 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.688649 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.688866 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.792671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.793042 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.793232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.793389 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.793548 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.897512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.897601 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.897624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.897653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:02 crc kubenswrapper[4780]: I1205 06:47:02.897689 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:02Z","lastTransitionTime":"2025-12-05T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.001325 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.001381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.001400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.001421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.001661 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.104098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.104183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.104201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.104237 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.104259 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.138206 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.138296 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.138208 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:03 crc kubenswrapper[4780]: E1205 06:47:03.138427 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:03 crc kubenswrapper[4780]: E1205 06:47:03.138547 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:03 crc kubenswrapper[4780]: E1205 06:47:03.138723 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.207136 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.207212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.207267 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.207288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.207300 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.310975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.311034 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.311051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.311082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.311104 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.414659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.414725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.414738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.414755 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.414767 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.519106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.519173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.519186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.519208 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.519229 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.622410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.622485 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.622503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.622533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.622553 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.727049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.727128 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.727146 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.727175 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.727197 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.830744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.830788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.830796 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.830813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.830824 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.932918 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.932959 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.932968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.932982 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:03 crc kubenswrapper[4780]: I1205 06:47:03.932991 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:03Z","lastTransitionTime":"2025-12-05T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.035193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.035232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.035241 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.035254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.035265 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137618 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137690 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.137748 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:04 crc kubenswrapper[4780]: E1205 06:47:04.137910 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.240385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.240430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.240439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.240454 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.240464 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.343625 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.343663 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.343675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.343693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.343703 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.446826 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.446911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.446925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.446940 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.446952 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.550019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.550068 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.550076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.550089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.550099 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.654139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.654190 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.654199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.654214 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.654224 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.757608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.757671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.757696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.757720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.757738 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.860682 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.860996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.861143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.861270 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.861366 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.964727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.964795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.964813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.964841 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:04 crc kubenswrapper[4780]: I1205 06:47:04.964858 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:04Z","lastTransitionTime":"2025-12-05T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.067725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.068107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.068202 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.068291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.068368 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.138054 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.138689 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.138297 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.138918 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.138102 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.139112 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.170549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.170593 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.170606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.170625 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.170637 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.273470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.273512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.273521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.273535 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.273545 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.377428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.377481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.377494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.377516 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.377529 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.417744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.417809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.417827 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.417853 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.417871 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.434750 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:05Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.439794 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.439990 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.440159 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.440272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.440356 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.455245 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:05Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.462194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.462254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.462279 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.462307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.462326 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.484071 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.488894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.488944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
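Independently of the webhook failure, every status sync records Ready=False for the same reason: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, a directory normally populated once the network pods come up (the ovnkube-controller is in CrashLoopBackOff later in this log). A small sketch, assuming it runs on the node, that reproduces the check the message describes:

from pathlib import Path

# Directory named in the kubelet's NetworkPluginNotReady message.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

# The kubelet reports the network as ready only once a CNI config exists here.
confs = sorted(CNI_DIR.glob("*")) if CNI_DIR.is_dir() else []
if confs:
    for conf in confs:
        print("found CNI config:", conf)
else:
    print(f"no CNI configuration file in {CNI_DIR}/ - network plugin not ready")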
event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.488957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.488979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.488993 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.505612 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.510554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.510616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
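The err= strings in these entries embed the JSON patch with two layers of quoting (the kubelet quotes the patch body, and the structured log quotes the whole error string again), which is why every quote shows up as \\\" in the journal. One way to recover a readable patch from a captured line, sketched here against an abbreviated sample payload rather than the full one from the log:

import json
import re

# Abbreviated sample of a journal line; the real entries embed the full
# node-status patch between `failed to patch status \"` and `\" for node`.
line = r'E1205 06:47:05.484071 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"conditions\\\":[{\\\"type\\\":\\\"Ready\\\",\\\"status\\\":\\\"False\\\"}]}}\" for node \"crc\": Internal error occurred"'

match = re.search(r'failed to patch status \\"(.*)\\" for node', line)
assert match is not None

escaped = match.group(1)           # still carries both quoting layers
once = json.loads(f'"{escaped}"')  # undo the err="..." quoting layer
text = json.loads(f'"{once}"')     # undo the inner quoting layer
patch = json.loads(text)           # finally parse the JSON itself
print(json.dumps(patch, indent=2))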
event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.510629 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.510654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.510670 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.532152 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Dec 05 06:47:05 crc kubenswrapper[4780]: E1205 06:47:05.532315 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.534418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.534489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.534503 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.534532 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.534552 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.638187 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.638268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.638279 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.638295 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.638309 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.741433 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.741478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.741489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.741506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.741517 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.844771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.844835 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.844855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.844930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.844955 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.948400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.948477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.948498 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.948530 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:05 crc kubenswrapper[4780]: I1205 06:47:05.948549 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:05Z","lastTransitionTime":"2025-12-05T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.051209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.051273 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.051301 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.051333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.051356 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.138400 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.139060 4780 scope.go:117] "RemoveContainer" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed" Dec 05 06:47:06 crc kubenswrapper[4780]: E1205 06:47:06.139139 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:06 crc kubenswrapper[4780]: E1205 06:47:06.139214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.152551 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.153846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.153878 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.153897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.153911 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.153921 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.169926 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.178853 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.189543 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.199492 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 
06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.210149 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.224136 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.236462 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.250643 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.256511 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.256549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.256561 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.256577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.256588 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.265203 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea81
38b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 
06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.276472 4780 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.286411 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.298703 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.307173 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.320724 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.340157 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.351013 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:06Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.358670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.358704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.358717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.358736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.358747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.460648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.460680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.460688 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.460701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.460710 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.562595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.562632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.562642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.562656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.562665 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.665407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.665445 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.665453 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.665468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.665477 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.768137 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.768226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.768237 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.768256 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.768267 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.870656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.870701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.870711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.870732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.870745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.972863 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.972950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.972961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.972985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:06 crc kubenswrapper[4780]: I1205 06:47:06.972998 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:06Z","lastTransitionTime":"2025-12-05T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.076192 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.076254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.076269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.076291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.076304 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.138171 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.138350 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.138545 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:07 crc kubenswrapper[4780]: E1205 06:47:07.138530 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:07 crc kubenswrapper[4780]: E1205 06:47:07.138639 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:07 crc kubenswrapper[4780]: E1205 06:47:07.138826 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.179547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.179598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.179608 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.179701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.179722 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.282409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.282443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.282455 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.282469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.282479 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.385387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.385458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.385473 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.385496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.385511 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.487316 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.487377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.487395 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.487419 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.487437 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.589869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.589938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.589950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.589969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.589981 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.692055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.692142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.692166 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.692206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.692231 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.795635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.795694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.795705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.795724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.795735 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.907224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.907337 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.907366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.907406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:07 crc kubenswrapper[4780]: I1205 06:47:07.907425 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:07Z","lastTransitionTime":"2025-12-05T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.009821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.009868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.009923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.009943 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.009954 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.113982 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.114077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.114091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.114108 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.114121 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.138658 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:08 crc kubenswrapper[4780]: E1205 06:47:08.138917 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.217102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.217168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.217183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.217209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.217228 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.321154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.321195 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.321207 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.321226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.321239 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.424486 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.424545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.424562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.424586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.424618 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.527769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.527922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.527958 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.527990 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.528011 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.632066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.632169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.632188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.632217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.632239 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.736047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.736143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.736164 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.736200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.736224 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.839797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.839916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.839944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.839980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.840005 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.942381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.942450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.942474 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.942504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:08 crc kubenswrapper[4780]: I1205 06:47:08.942526 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:08Z","lastTransitionTime":"2025-12-05T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.044711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.044789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.044806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.044862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.044922 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.138406 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.138448 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.138562 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:09 crc kubenswrapper[4780]: E1205 06:47:09.138674 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:09 crc kubenswrapper[4780]: E1205 06:47:09.138825 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:09 crc kubenswrapper[4780]: E1205 06:47:09.138987 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.147492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.147528 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.147540 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.147559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.147575 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.250313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.250401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.250421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.250849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.251148 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.354823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.354948 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.354980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.355012 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.355037 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.458091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.458152 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.458169 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.458194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.458212 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.561716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.561767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.561786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.561811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.561829 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.665913 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.666041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.666054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.666073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.666082 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.769298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.769374 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.769402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.769440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.769465 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.872465 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.872505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.872515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.872528 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.872537 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.975996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.976091 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.976131 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.976160 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:09 crc kubenswrapper[4780]: I1205 06:47:09.976178 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:09Z","lastTransitionTime":"2025-12-05T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.084496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.084569 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.084589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.084619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.084638 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.139328 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:10 crc kubenswrapper[4780]: E1205 06:47:10.139533 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.187463 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.187706 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.187771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.187843 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.187926 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.291008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.291259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.291407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.291482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.291543 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.394544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.394636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.394657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.394687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.394709 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.497024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.497316 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.497406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.497499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.497591 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.601272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.601329 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.601338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.601356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.601366 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.704968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.705018 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.705028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.705045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.705057 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.809564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.809655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.809668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.809691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.809710 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.913255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.913336 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.913356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.913388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:10 crc kubenswrapper[4780]: I1205 06:47:10.913410 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:10Z","lastTransitionTime":"2025-12-05T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.015680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.015724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.015735 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.015750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.015762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.118456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.118487 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.118495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.118511 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.118522 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.138024 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.138069 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:11 crc kubenswrapper[4780]: E1205 06:47:11.138139 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.138025 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:11 crc kubenswrapper[4780]: E1205 06:47:11.138257 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:11 crc kubenswrapper[4780]: E1205 06:47:11.138408 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.220527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.220587 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.220604 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.220627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.220644 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.284457 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:11 crc kubenswrapper[4780]: E1205 06:47:11.284602 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:47:11 crc kubenswrapper[4780]: E1205 06:47:11.284660 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:47:43.284640051 +0000 UTC m=+97.354156383 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.322701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.322740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.322749 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.322764 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.322773 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.426788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.426849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.426865 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.426925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.426941 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.529527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.529567 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.529579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.529597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.529607 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.633019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.633059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.633070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.633085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.633096 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.735178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.735409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.735421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.735439 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.735449 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.837912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.837991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.838007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.838029 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.838091 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.941997 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.942047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.942062 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.942082 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:11 crc kubenswrapper[4780]: I1205 06:47:11.942096 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:11Z","lastTransitionTime":"2025-12-05T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.044021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.044064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.044076 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.044092 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.044116 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.138521 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:12 crc kubenswrapper[4780]: E1205 06:47:12.138662 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.149163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.149199 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.149209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.149222 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.149232 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.251079 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.251140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.251183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.251201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.251211 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.353390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.353429 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.353440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.353454 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.353465 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.456417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.456496 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.456521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.456556 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.456587 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.493604 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/0.log" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.493685 4780 generic.go:334] "Generic (PLEG): container finished" podID="74991823-72ec-4b41-bb63-e92307688c30" containerID="13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70" exitCode=1 Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.493740 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerDied","Data":"13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.494422 4780 scope.go:117] "RemoveContainer" containerID="13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.561863 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.562323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.562338 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.562361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.562377 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.562969 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.597243 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.611743 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.626670 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.657253 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.671653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.671718 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc 
kubenswrapper[4780]: I1205 06:47:12.671734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.671754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.671768 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.678176 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 
05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.694938 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.709600 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.726234 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.740363 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.755933 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.773384 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.774813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.774860 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.774874 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.774917 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.774931 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.785511 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.796331 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.808958 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.825122 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.836340 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:12Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.877179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.877230 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.877244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.877261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.877273 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.980061 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.980218 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.980295 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.980369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:12 crc kubenswrapper[4780]: I1205 06:47:12.980432 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:12Z","lastTransitionTime":"2025-12-05T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.083313 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.083345 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.083354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.083366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.083375 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.138259 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.138292 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:13 crc kubenswrapper[4780]: E1205 06:47:13.138374 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:13 crc kubenswrapper[4780]: E1205 06:47:13.138462 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.138632 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:13 crc kubenswrapper[4780]: E1205 06:47:13.138813 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.185822 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.186139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.186224 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.186310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.186373 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.289964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.290497 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.290610 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.290721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.291324 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.394223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.394267 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.394278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.394294 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.394308 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.497019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.497066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.497077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.497096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.497108 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.500595 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/0.log" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.500716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerStarted","Data":"ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.521665 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.536948 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.552526 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.568829 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.584555 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.597669 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.599785 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.599869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.600011 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.600098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.600145 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.619524 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.631429 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.644905 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.673010 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.689527 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703211 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703355 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703391 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.703488 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.719249 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.732454 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.744093 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.764474 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 
06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.782465 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:13Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.806171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.806238 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.806253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.806278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.806297 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.908833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.908896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.908909 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.908924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:13 crc kubenswrapper[4780]: I1205 06:47:13.908934 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:13Z","lastTransitionTime":"2025-12-05T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.012582 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.012637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.012648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.012666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.012676 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.115681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.115727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.115740 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.115759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.115775 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.138802 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:14 crc kubenswrapper[4780]: E1205 06:47:14.139006 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.218580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.218624 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.218638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.218658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.218667 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.320798 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.320867 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.320905 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.320935 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.320956 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.423772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.423840 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.423863 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.423924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.423945 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.526617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.526679 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.526693 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.526717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.526734 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.629352 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.629477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.629504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.629541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.629565 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.732194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.732343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.732363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.732434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.732456 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.835402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.835499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.835525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.835559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.835594 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.938969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.939017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.939029 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.939047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:14 crc kubenswrapper[4780]: I1205 06:47:14.939059 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:14Z","lastTransitionTime":"2025-12-05T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.041836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.041912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.041923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.041943 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.041957 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.138160 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.138207 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.138274 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.138351 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.138596 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.138923 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.145376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.145402 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.145411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.145427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.145436 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.151413 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.248906 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.248944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.248953 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.248969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.248979 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.352514 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.352562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.352575 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.352595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.352607 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.456090 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.456145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.456158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.456180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.456196 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.559116 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.559227 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.559249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.559315 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.559337 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.663144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.663191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.663201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.663220 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.663236 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.766664 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.766707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.766715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.766727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.766735 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.792094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.792180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.792206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.792244 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.792269 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.810354 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:15Z is after 
2025-08-24T17:21:41Z" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.817067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.817127 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.817149 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.817180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.817202 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.836477 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:15Z is after 
2025-08-24T17:21:41Z" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.842385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.842456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.842467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.842489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.842499 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.861107 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:15Z is after 
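
Every status patch in this stretch fails the same way: the API server cannot reach the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-05, a common state for a CRC VM started long after its bundled certificates were minted. Below is a diagnostic sketch (not OpenShift tooling) that dials the endpoint from the node and prints the certificate's validity window; InsecureSkipVerify is deliberate, since the point is to inspect a chain that no longer verifies.

// certcheck.go, a sketch that inspects the webhook serving certificate
// reported as expired in the errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the log; run this on the node itself.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

CRC normally rotates expired certificates during startup, so a notAfter this far in the past usually means letting that rotation finish (or starting from a fresh bundle) rather than patching the webhook by hand.
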
2025-08-24T17:21:41Z" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.867008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.867049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.867066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.867083 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.867098 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.887528 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:15Z is after 
2025-08-24T17:21:41Z" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.892831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.892920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.892944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.892971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.892994 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.915092 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:15Z is after 
2025-08-24T17:21:41Z" Dec 05 06:47:15 crc kubenswrapper[4780]: E1205 06:47:15.915333 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.917148 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.917193 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.917210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.917233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:15 crc kubenswrapper[4780]: I1205 06:47:15.917250 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:15Z","lastTransitionTime":"2025-12-05T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.021015 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.021087 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.021104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.021128 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.021145 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.124756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.124797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.124807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.124823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.124835 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.138224 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:16 crc kubenswrapper[4780]: E1205 06:47:16.138330 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.152545 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.168899 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.186857 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.201935 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.218033 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.227106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.227150 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.227187 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.227207 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.227217 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.229595 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.242046 4780 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.260386 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.273232 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.287508 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.307955 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.320300 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
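[annotation] The one pod that is actually failing, rather than merely unreportable, is ovnkube-node-lf5cd: its ovnkube-controller container exits with "failed to run ovnkube" and sits in CrashLoopBackOff with a 20s back-off after its second crash. For reference, kubelet's restart back-off doubles from a 10s base up to a 5-minute cap; those constants are kubelet defaults and are not printed in this log, so treat the sketch as illustrative:

```python
# Sketch: kubelet-style crash-loop back-off, doubling from 10s to a 300s cap.
# A 20s back-off corresponds to the second consecutive crash, matching the
# "back-off 20s restarting failed container=ovnkube-controller" message.
def crashloop_backoff(crashes: int, base_s: int = 10, cap_s: int = 300) -> int:
    return min(base_s * 2 ** (crashes - 1), cap_s)

print([crashloop_backoff(n) for n in (1, 2, 3, 4, 5, 6)])
# -> [10, 20, 40, 80, 160, 300]
```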
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.330500 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.330551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.330564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.330583 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.330594 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.334914 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.349627 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.360172 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.371048 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.385614 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 
06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.399230 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:16Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.432158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.432190 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.432198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.432212 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.432220 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.534817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.534847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.534856 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.534933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.534944 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.639896 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.639933 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.639941 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.639957 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.639969 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.743044 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.743089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.743099 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.743115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.743125 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.845185 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.845220 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.845228 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.845242 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.845251 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.947515 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.947553 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.947565 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.947579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:16 crc kubenswrapper[4780]: I1205 06:47:16.947589 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:16Z","lastTransitionTime":"2025-12-05T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.049747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.049780 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.049789 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.049803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.049811 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.138654 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.138765 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.138935 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:17 crc kubenswrapper[4780]: E1205 06:47:17.138942 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:17 crc kubenswrapper[4780]: E1205 06:47:17.139031 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:17 crc kubenswrapper[4780]: E1205 06:47:17.139114 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.152667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.152716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.152725 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.152744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.152758 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.255489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.255531 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.255541 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.255564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.255576 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.358996 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.359030 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.359039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.359052 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.359061 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.461552 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.461599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.461611 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.461627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.461636 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.564117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.564157 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.564170 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.564186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.564197 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.665938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.665993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.666002 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.666021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.666035 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.767961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.768004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.768019 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.768038 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.768049 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.870143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.870183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.870192 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.870209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.870218 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.972823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.972868 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.972904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.972923 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:17 crc kubenswrapper[4780]: I1205 06:47:17.972933 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:17Z","lastTransitionTime":"2025-12-05T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.075914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.075956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.075965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.075981 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.075995 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.138822 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:18 crc kubenswrapper[4780]: E1205 06:47:18.139109 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.178114 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.178170 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.178185 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.178206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.178222 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.280401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.280495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.280523 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.280563 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.280587 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.385432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.385508 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.385529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.385557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.385577 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.488326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.488372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.488381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.488397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.488406 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.591100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.591151 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.591162 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.591181 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.591196 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.694964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.695040 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.695055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.695081 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.695098 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.798276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.798363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.798387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.798421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.798448 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.900743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.900770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.900779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.900791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:18 crc kubenswrapper[4780]: I1205 06:47:18.900800 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:18Z","lastTransitionTime":"2025-12-05T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.003400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.003451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.003471 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.003490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.003501 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.105554 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.105607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.105619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.105635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.105645 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.137985 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.138129 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.138146 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:19 crc kubenswrapper[4780]: E1205 06:47:19.138264 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:19 crc kubenswrapper[4780]: E1205 06:47:19.138390 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:19 crc kubenswrapper[4780]: E1205 06:47:19.138457 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.139255 4780 scope.go:117] "RemoveContainer" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.207904 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.207954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.207966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.207988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.208000 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.310979 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.311025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.311035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.311048 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.311057 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.414050 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.414132 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.414154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.414180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.414198 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.516721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.517109 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.517128 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.517150 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.517166 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.522291 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/2.log" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.525407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.526834 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.539646 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.557399 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.571300 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.590252 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.607604 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.620590 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.620642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.620655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.620672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.620686 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.623447 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.647322 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.667860 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.683718 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.709517 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.722951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.722985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.722994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.723009 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.723021 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.726205 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.741594 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.759721 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.773194 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\
\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.785900 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.797203 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.809329 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.818271 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:19Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.824992 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.825021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.825029 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.825072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.825082 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.927058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.927085 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.927093 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.927106 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:19 crc kubenswrapper[4780]: I1205 06:47:19.927114 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:19Z","lastTransitionTime":"2025-12-05T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.029077 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.029153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.029163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.029175 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.029184 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.131772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.131823 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.131839 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.131861 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.131906 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.138162 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:47:20 crc kubenswrapper[4780]: E1205 06:47:20.138316 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.234176 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.234214 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.234223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.234240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.234249 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.336428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.336478 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.336489 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.336505 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.336517 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.439414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.439457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.439481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.439497 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.439508 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.530837 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.531587 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/2.log"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.539046 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" exitCode=1
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.539101 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"}
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.539204 4780 scope.go:117] "RemoveContainer" containerID="bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.540695 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"
Dec 05 06:47:20 crc kubenswrapper[4780]: E1205 06:47:20.541101 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.543213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.543251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.543264 4780 kubelet_node_status.go:724] "Recording event message for node"
node="crc" event="NodeHasSufficientPID" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.543283 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.543295 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.554650 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.574117 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\
",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbe5e0faff988f07bb137a20aa014af4257c7da862d8d17c4833c803654665ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:46:53Z\\\",\\\"message\\\":\\\" openshift-machine-config-operator/machine-config-daemon-mjftd openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-lf5cd]\\\\nI1205 06:46:53.012458 6430 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1205 06:46:53.012471 6430 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012479 6430 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012488 6430 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd in node crc\\\\nI1205 06:46:53.012493 6430 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-lf5cd after 0 failed attempt(s)\\\\nI1205 06:46:53.012498 6430 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-lf5cd\\\\nI1205 06:46:53.012510 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1205 06:46:53.012563 6430 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:20Z\\\",\\\"message\\\":\\\"ed: retrying failed objects of type *v1.Pod\\\\nI1205 06:47:20.093604 6793 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-multus/network-metrics-daemon-zkjck openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-lf5cd openshift-dns/node-resolver-frbm8 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-mjftd openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-bwf64 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc]\\\\nF1205 06:47:20.093663 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.587930 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.600799 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.611835 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.620682 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 
06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.629779 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.639721 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.644967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.644994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.645003 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.645026 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.645038 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.652246 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.663188 4780 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.674285 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.685193 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.693764 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.703780 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 
2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.712763 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.722178 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.730018 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.739681 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:20Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.747354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.747398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.747410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.747428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.747441 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.849543 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.849592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.849603 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.849621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.849631 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.951753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.951799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.951810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.951828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:20 crc kubenswrapper[4780]: I1205 06:47:20.951842 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:20Z","lastTransitionTime":"2025-12-05T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.054242 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.054302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.054396 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.054430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.054462 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.138276 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.138322 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:21 crc kubenswrapper[4780]: E1205 06:47:21.138412 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.138502 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:21 crc kubenswrapper[4780]: E1205 06:47:21.138567 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:21 crc kubenswrapper[4780]: E1205 06:47:21.138745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.157062 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.157095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.157103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.157117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.157126 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.259134 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.259198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.259222 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.259249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.259271 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.361746 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.361801 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.361819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.361846 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.361864 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.464142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.464205 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.464226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.464254 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.464316 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.544032 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.547970 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:47:21 crc kubenswrapper[4780]: E1205 06:47:21.548139 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.566544 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.566577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.566588 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.566602 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.566614 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.568507 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:20Z\\\",\\\"message\\\":\\\"ed: retrying failed objects of type *v1.Pod\\\\nI1205 06:47:20.093604 6793 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-multus/network-metrics-daemon-zkjck openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-lf5cd openshift-dns/node-resolver-frbm8 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-mjftd openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-bwf64 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc]\\\\nF1205 06:47:20.093663 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.581609 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.594934 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.608063 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.619642 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.632769 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.648995 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.657603 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 
06:47:21.667612 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.669653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.669686 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.669697 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.669716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.669728 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.678426 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.688287 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.700023 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.711262 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.721202 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.732300 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.742475 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.752606 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.762977 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:21Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.771233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.771286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.771300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.771316 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:21 crc kubenswrapper[4780]: I1205 06:47:21.771329 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:21Z","lastTransitionTime":"2025-12-05T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... the preceding five-entry node-status block ("NodeHasSufficientMemory", "NodeHasNoDiskPressure", "NodeHasSufficientPID", "NodeNotReady", and the setters.go "Node became not ready" condition) repeats at roughly 100 ms intervals from 06:47:21.874 through 06:47:23.929 with only the timestamps changing; the duplicate blocks are omitted here and the interleaved unique entries are kept ...]
Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.137767 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:47:22 crc kubenswrapper[4780]: E1205 06:47:22.137931 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
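[editor's note] Every "Failed to update status for pod" entry above fails for the same reason: the API server's call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the endpoint presents a serving certificate that expired on 2025-08-24T17:21:41Z, long before the node's current time of 2025-12-05. A minimal sketch for confirming the expiry from the node follows; it is not part of any OpenShift tooling, it assumes Python 3 with the third-party cryptography package (version 42 or newer for the *_utc accessors), and the host and port are taken from the log line itself.

    # check_webhook_cert.py: fetch the TLS serving certificate from the
    # network-node-identity webhook endpoint and print its validity window.
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the kubelet errors above

    # Verification is disabled on purpose: the goal is to inspect the bad
    # certificate, not to trust it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER even when unverified

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before_utc)
    print("not after: ", cert.not_valid_after_utc)
    print("expired:   ", now > cert.not_valid_after_utc)

If the printed "not after" matches the 2025-08-24T17:21:41Z in the errors, the certificate itself is the problem, not the kubelet; on CRC, cluster certificate rotation is normally expected to renew it shortly after a stale bundle starts, so errors like these should be transient.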
Has your network provider started?"} Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.694597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.694836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.694965 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.695041 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.695115 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:22Z","lastTransitionTime":"2025-12-05T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.797466 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.797512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.797522 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.797539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.797550 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:22Z","lastTransitionTime":"2025-12-05T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.899195 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.899272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.899286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.899305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:22 crc kubenswrapper[4780]: I1205 06:47:22.899317 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:22Z","lastTransitionTime":"2025-12-05T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.001994 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.002024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.002032 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.002045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.002053 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.104731 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.104768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.104776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.104790 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.104798 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.138080 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.138161 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:23 crc kubenswrapper[4780]: E1205 06:47:23.138264 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.138305 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:23 crc kubenswrapper[4780]: E1205 06:47:23.138441 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:23 crc kubenswrapper[4780]: E1205 06:47:23.138629 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.207361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.207494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.207513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.207537 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.207554 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.310135 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.310410 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.310501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.310605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.310700 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.413728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.413794 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.413816 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.413847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.413870 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.516862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.516943 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.516956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.516969 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.516980 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.619915 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.619971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.619985 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.620010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.620025 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.722386 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.722470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.722504 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.722536 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.722562 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.825698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.825764 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.825788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.825820 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.825845 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.928988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.929028 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.929036 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.929050 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:23 crc kubenswrapper[4780]: I1205 06:47:23.929059 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:23Z","lastTransitionTime":"2025-12-05T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.031533 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.031609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.031635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.031666 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.031689 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.133952 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.134017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.134030 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.134048 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.134060 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.138105 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:47:24 crc kubenswrapper[4780]: E1205 06:47:24.138299 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.237368 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.237414 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.237426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.237443 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.237455 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.340617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.340673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.340689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.340711 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.340727 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.442543 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.442584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.442595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.442610 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.442621 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.544689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.544729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.544737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.544750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.544758 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.647295 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.647337 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.647347 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.647362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.647373 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.749694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.749748 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.749760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.749776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.749789 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.852661 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.852712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.852723 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.852737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.852747 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.955691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.955736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.955746 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.955762 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:24 crc kubenswrapper[4780]: I1205 06:47:24.955771 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:24Z","lastTransitionTime":"2025-12-05T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.058049 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.058105 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.058118 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.058136 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.058146 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.137754 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.137803 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.137817 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck"
Dec 05 06:47:25 crc kubenswrapper[4780]: E1205 06:47:25.137863 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 06:47:25 crc kubenswrapper[4780]: E1205 06:47:25.137998 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be"
Dec 05 06:47:25 crc kubenswrapper[4780]: E1205 06:47:25.138203 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.160456 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.160501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.160513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.160529 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.160541 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.262547 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.262605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.262614 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.262632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.262640 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.364744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.364785 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.364793 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.364810 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.364819 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.467237 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.467268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.467276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.467288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.467296 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.588221 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.588255 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.588288 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.588302 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.588310 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.695572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.695626 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.695638 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.695657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.695668 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.798307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.798350 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.798360 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.798375 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.798384 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.900805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.900847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.900857 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.900872 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:25 crc kubenswrapper[4780]: I1205 06:47:25.900909 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:25Z","lastTransitionTime":"2025-12-05T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.004014 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.004059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.004072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.004089 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.004102 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.106189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.106234 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.106249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.106263 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.106273 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.138145 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.138344 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.153561 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z"
Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.167673 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z"
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.179692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.179705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.179723 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.179736 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.187434 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.195309 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.198060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.198093 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.198103 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.198115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.198125 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.203007 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.213798 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.217157 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.217191 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.217210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.217240 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.217254 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.218774 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.229731 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.232125 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.234602 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.234641 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.234653 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.234668 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.234679 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.239092 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.248751 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.250955 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.251767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.251795 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.251805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.251817 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.251825 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.259059 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.261824 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: E1205 06:47:26.261969 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.262967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.263000 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.263008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.263022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.263031 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.270246 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.286818 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:20Z\\\",\\\"message\\\":\\\"ed: retrying failed objects of type *v1.Pod\\\\nI1205 06:47:20.093604 6793 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-multus/network-metrics-daemon-zkjck openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-lf5cd openshift-dns/node-resolver-frbm8 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-mjftd openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-bwf64 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc]\\\\nF1205 06:47:20.093663 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.296999 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.307378 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.320189 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.330009 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 
06:47:26.340300 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.353785 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.363735 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:26Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.365025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.365059 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.365073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.365092 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.365107 4780 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.467930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.468010 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.468035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.468066 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.468088 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.569944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.569978 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.569986 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.570000 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.570010 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.672346 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.672579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.672627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.672647 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.672656 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.774609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.774655 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.774665 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.774681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.774691 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.876989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.877034 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.877043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.877056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.877068 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.979361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.979425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.979444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.979468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:26 crc kubenswrapper[4780]: I1205 06:47:26.979484 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:26Z","lastTransitionTime":"2025-12-05T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.081858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.081951 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.081968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.081993 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.082010 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.137904 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.137971 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.138000 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:27 crc kubenswrapper[4780]: E1205 06:47:27.138044 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:27 crc kubenswrapper[4780]: E1205 06:47:27.138165 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:27 crc kubenswrapper[4780]: E1205 06:47:27.138316 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.185145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.185189 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.185201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.185217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.185230 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.287839 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.287909 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.287925 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.287941 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.287950 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.390597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.390645 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.390656 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.390671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.390682 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.493323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.493366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.493380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.493397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.493410 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.595356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.595418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.595441 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.595469 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.595516 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.697617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.697659 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.697670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.697687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.697701 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.800123 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.800162 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.800173 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.800187 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.800197 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.902808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.902852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.902864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.902897 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:27 crc kubenswrapper[4780]: I1205 06:47:27.902909 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:27Z","lastTransitionTime":"2025-12-05T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.007319 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.007353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.007364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.007378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.007387 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.128642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.128743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.128760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.128779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.128790 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.138092 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:28 crc kubenswrapper[4780]: E1205 06:47:28.138321 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.230971 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.231030 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.231047 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.231072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.231088 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.333691 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.333726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.333737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.333753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.333764 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.436163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.436575 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.436732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.436974 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.437118 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.539800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.540188 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.540362 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.540521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.540661 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.644213 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.644268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.644286 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.644311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.644329 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.747230 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.747587 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.747599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.747620 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.747635 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.849924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.850101 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.850139 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.850174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.850199 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.953262 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.953606 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.953743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.953910 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:28 crc kubenswrapper[4780]: I1205 06:47:28.954035 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:28Z","lastTransitionTime":"2025-12-05T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.057311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.057380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.057397 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.057421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.057438 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.138680 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.138813 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.139060 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.139171 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.139315 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.139432 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.159056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.159324 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:33.159280221 +0000 UTC m=+147.228796603 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.159675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.159966 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.160015 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.160038 4780 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.160123 4780 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.160131 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 06:48:33.160106875 +0000 UTC m=+147.229623247 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.160230 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 06:48:33.160201777 +0000 UTC m=+147.229718149 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.160364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.160400 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.160407 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.160421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.160431 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.159982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.161121 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.161194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161290 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161306 4780 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161317 4780 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161363 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 06:48:33.161345769 +0000 UTC m=+147.230862101 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161368 4780 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: E1205 06:47:29.161446 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:33.161421761 +0000 UTC m=+147.230938143 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.264672 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.264753 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.264788 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.264819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.264842 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.367477 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.367564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.367592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.367623 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.367645 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.470428 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.470480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.470493 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.470513 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.470533 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.573152 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.573198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.573209 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.573225 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.573236 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.676658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.676712 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.676729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.676752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.676797 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.779259 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.779292 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.779300 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.779314 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.779331 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.881444 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.881482 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.881490 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.881506 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.881515 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.983861 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.983998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.984021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.984053 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:29 crc kubenswrapper[4780]: I1205 06:47:29.984075 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:29Z","lastTransitionTime":"2025-12-05T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.087231 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.087294 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.087312 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.087364 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.087382 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.137924 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:30 crc kubenswrapper[4780]: E1205 06:47:30.138080 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.189779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.189833 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.189851 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.189871 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.189915 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.292508 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.292578 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.292597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.292621 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.292638 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.395640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.395696 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.395707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.395724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.395736 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.498525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.498570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.498580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.498595 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.498606 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.602144 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.602227 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.602249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.602280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.602302 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.705226 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.705310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.705353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.705379 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.705397 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.808562 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.808643 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.808660 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.808687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.808706 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.917577 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.917687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.917708 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.917736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:30 crc kubenswrapper[4780]: I1205 06:47:30.917754 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:30Z","lastTransitionTime":"2025-12-05T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.020495 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.020564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.020581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.020605 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.020622 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.123652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.123694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.123733 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.123749 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.123760 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.138201 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.138226 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.138308 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:31 crc kubenswrapper[4780]: E1205 06:47:31.138429 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:31 crc kubenswrapper[4780]: E1205 06:47:31.138626 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:31 crc kubenswrapper[4780]: E1205 06:47:31.138793 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.226095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.226132 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.226140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.226153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.226163 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.328564 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.328619 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.328642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.328670 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.328691 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.431851 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.431928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.431940 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.431956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.431976 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.534299 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.534350 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.534365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.534384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.534409 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.637591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.637635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.637646 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.637662 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.637673 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.740803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.740851 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.740864 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.740908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.740921 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.844632 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.844698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.844715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.844745 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.844762 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.947689 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.947746 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.947756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.947769 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:31 crc kubenswrapper[4780]: I1205 06:47:31.947778 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:31Z","lastTransitionTime":"2025-12-05T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.051271 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.051336 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.051353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.051380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.051397 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.138002 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:32 crc kubenswrapper[4780]: E1205 06:47:32.138295 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.153732 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.153785 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.153798 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.153811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.153821 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.257508 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.257570 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.257580 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.257597 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.257607 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.360592 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.360633 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.360642 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.360658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.360667 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.463613 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.463738 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.463771 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.463803 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.463831 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.566298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.566356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.566366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.566381 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.566390 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.668928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.668975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.668989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.669022 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.669046 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.771025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.771073 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.771084 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.771104 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.771117 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.874526 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.874586 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.874603 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.874633 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.874651 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.977808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.977848 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.977859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.977894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:32 crc kubenswrapper[4780]: I1205 06:47:32.977904 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:32Z","lastTransitionTime":"2025-12-05T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.080253 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.080326 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.080349 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.080384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.080410 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.138261 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.138318 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.138360 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:33 crc kubenswrapper[4780]: E1205 06:47:33.138415 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:33 crc kubenswrapper[4780]: E1205 06:47:33.138536 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:33 crc kubenswrapper[4780]: E1205 06:47:33.138666 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.183637 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.183907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.183987 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.184119 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.184191 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.287525 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.287589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.287607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.287631 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.287648 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.391194 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.391260 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.391279 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.391303 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.391322 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.494356 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.494420 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.494437 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.494458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.494474 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.597518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.597581 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.597603 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.597634 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.597659 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.700167 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.700232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.700249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.700272 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.700290 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.803097 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.803278 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.803298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.803323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.803342 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.906284 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.906344 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.906361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.906384 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:33 crc kubenswrapper[4780]: I1205 06:47:33.906402 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:33Z","lastTransitionTime":"2025-12-05T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.009340 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.009408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.009425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.009449 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.009466 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.112417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.112457 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.112466 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.112481 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.112490 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.138336 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:34 crc kubenswrapper[4780]: E1205 06:47:34.138531 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.216280 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.216373 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.216398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.216426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.216451 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.319676 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.319766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.319783 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.319807 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.319825 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.422081 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.422145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.422163 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.422186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.422204 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.524705 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.524750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.524763 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.524777 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.524786 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.627579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.627627 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.627640 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.627657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.627673 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.730141 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.730223 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.730245 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.730269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.730286 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.833232 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.833268 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.833276 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.833290 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.833299 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.935512 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.935560 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.935572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.935589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:34 crc kubenswrapper[4780]: I1205 06:47:34.935600 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:34Z","lastTransitionTime":"2025-12-05T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.038702 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.038754 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.038767 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.038786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.038807 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.138471 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:35 crc kubenswrapper[4780]: E1205 06:47:35.138658 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.138712 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.138729 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:35 crc kubenswrapper[4780]: E1205 06:47:35.138794 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:35 crc kubenswrapper[4780]: E1205 06:47:35.139172 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.141765 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.141811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.141831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.141855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.141874 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.245013 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.245054 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.245065 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.245080 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.245089 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.348720 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.348784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.348806 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.348836 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.348854 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.451494 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.451539 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.451557 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.451584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.451601 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.554388 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.554424 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.554432 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.554447 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.554458 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.656943 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.657001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.657021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.657055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.657077 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.760100 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.760178 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.760197 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.760220 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.760238 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.862870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.862920 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.862929 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.862945 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.862954 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.965357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.965393 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.965404 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.965418 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:35 crc kubenswrapper[4780]: I1205 06:47:35.965428 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:35Z","lastTransitionTime":"2025-12-05T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.067966 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.068043 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.068058 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.068072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.068080 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.139213 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.140087 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.140099 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.140298 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.160857 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ad1697d-e6c7-400d-b4cd-d38c29b1fbd2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b855dda59ff60dee0bf8afa152a02896b71d5fec317bb133a614afa78ae9897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://860f094f236864d8089841b46f5b6b854eaad58b1044bd26f8adf75242ad4d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e0979c168b95875d999a64485eecc1724b9f5b75897ac80ef4db71f3524dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.175106 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e2ff678-444d-492c-86b0-0b37c214ece9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://724618240abd2923cb38c092c4f80ae54fcc94755bfd818464bbfbad0453dd21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06377cd7b4c8ba05ef7dc1ed879c3c1fd5a88141c187bc91977c29d0d67e4ca2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.176071 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.176421 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.176470 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.176607 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.176630 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.193075 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aff252dc43ffc25d6c52a5d4f32c17c859a48ca0788207dfa28fc20dbbef03e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.213983 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a640087b-e493-4ac1-bef1-a9c05dd7c0ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae211e86b5661ac139f7d5581dee4038c0f4e2f6a6011f3e2022cae658addd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d78jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjftd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.237036 4780 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-bwf64" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74991823-72ec-4b41-bb63-e92307688c30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:11Z\\\",\\\"message\\\":\\\"2025-12-05T06:46:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca\\\\n2025-12-05T06:46:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eeabab97-9c59-48aa-883c-3ce7d98593ca to /host/opt/cni/bin/\\\\n2025-12-05T06:46:26Z [verbose] multus-daemon started\\\\n2025-12-05T06:46:26Z [verbose] Readiness Indicator file check\\\\n2025-12-05T06:47:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9nwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bwf64\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.255804 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52f67f7f-0fdd-4d00-aebe-1b29ae739ff1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T06:46:24Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1205 06:46:18.471297 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 06:46:18.472697 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-107678985/tls.crt::/tmp/serving-cert-107678985/tls.key\\\\\\\"\\\\nI1205 06:46:24.271855 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 06:46:24.275950 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 06:46:24.275986 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 06:46:24.276031 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 06:46:24.276044 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 06:46:24.284385 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1205 06:46:24.284455 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1205 06:46:24.284503 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284529 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 06:46:24.284551 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 06:46:24.284571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 06:46:24.284592 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 06:46:24.284612 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1205 06:46:24.286852 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.270798 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.279739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.279828 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.279851 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.279917 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.279937 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.282899 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-frbm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79be6eea-5a91-47e1-8284-989d30c1a8b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e19220281a34f7403df4c39b0ebe55f99c8e2b9787131115e34c6500ec4d4173\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbwvq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-frbm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.295166 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-zkjck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bjk8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-zkjck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.314096 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61c4a70b-17c4-4f09-a541-5161825c4c03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T06:47:20Z\\\",\\\"message\\\":\\\"ed: retrying failed objects of type *v1.Pod\\\\nI1205 06:47:20.093604 6793 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-multus/network-metrics-daemon-zkjck openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/iptables-alerter-4ln5h openshift-ovn-kubernetes/ovnkube-node-lf5cd openshift-dns/node-resolver-frbm8 openshift-kube-controller-manager/kube-controller-manager-crc openshift-multus/multus-additional-cni-plugins-scs78 openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-machine-config-operator/machine-config-daemon-mjftd openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-bwf64 openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-machine-config-operator/kube-rbac-proxy-crio-crc]\\\\nF1205 06:47:20.093663 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T06:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrsmk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lf5cd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.326507 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8986ad8-ac4a-499b-bf48-363f358c1876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://727011b03210e5d9529141ec4d91ca896f1c0c6a30441b26dfd8cb280de492f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee0084aab0c1549968e26d26d736651d83c52cb074945de5aa7ebda04615ae44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhcqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tdkql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.340653 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7757aa5d1ced2ebcddad09495406f2b686c12b4d5e313ae05742a4c30138be45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19569ef939a25885c1d37a93a4abb06a5711c23f5185141f23fb65aca3715384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.357513 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40e86faaa29401ff88879c4cb34000e6c4b8602d44a43436057c08231046841f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.376679 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.382749 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.382821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.382845 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.382914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.382942 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.390191 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.410404 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-scs78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cce73093-dc28-44a3-b6b6-e153e0f4d1ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://686207bd8154b28dfdfad6672c1c7e9e42cd10e07e03f053a93b92a8dd810c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05b8bb7850a14afb15a7af3e1d37971d1efb872dfeacb3b00ed979fc8e5e15f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26a9d18c01cfbd6226ba7619811c91f9bdebf8f2918601913e330933ea545d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8295997d9c5eec9336c1cae27fa4936e5aa39dd78812e8560860eab96837d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aea6e9f0e690d28770271fa898a2ba71d0937ca8ca77b02fac650a774ad926c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ad73f7c8d347b5e2c8db8717ea6c501374eaefeff23375cd68afba77153c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a028fd9629dc6e740e184f436d24322248048942a274fe6711234798536ca313\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk6sn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-scs78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.420948 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j76x7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f487d7de-9cce-457c-9dfa-09dfb392dde1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3d23f4350df05aff3c4fc28d256f761bce2a3ff420ffe0583bdfa9e1275d52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdrmb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j76x7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.436219 4780 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e00890d8-91f0-4740-9bc5-836838906e5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a2fe83b4cc33e6bc60f7fef9c625482cdc2157217840e07773635ecbf2a7801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dad796f10dca1d6d7c0139938654ccfa863096e6339864267e2076659112801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f1a185cbe6ba08c0082100bf91075124f9e38c5c7c091503fbaabaf0eddb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T06:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37883b8ad37cb3b6c015814c2dbba3bec67bfb845ba9f682fa49ae1f68dd184\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T06:46:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T06:46:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T06:46:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.484797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.484905 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.484916 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.484928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.484938 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.485743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.485790 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.485802 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.485821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.485834 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.498096 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.502698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.502739 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.502752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.502773 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.502788 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.524328 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.528657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.528694 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.528703 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.528721 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.528732 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.546692 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.551174 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.551216 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.551230 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.551251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.551266 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.565844 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.574584 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.574736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.574852 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.574988 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.575105 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.588921 4780 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T06:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3e37a7b-0024-4e60-b062-627e3948945a\\\",\\\"systemUUID\\\":\\\"c3fe25e1-e381-4010-89fb-f17e2c9cc29f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T06:47:36Z is after 2025-08-24T17:21:41Z" Dec 05 06:47:36 crc kubenswrapper[4780]: E1205 06:47:36.589305 4780 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.591298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.591435 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.591522 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.591604 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.591689 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.693906 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.693950 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.693961 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.693975 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.693986 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.796398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.796440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.796451 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.796468 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.796516 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.898811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.898859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.898918 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.898943 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:36 crc kubenswrapper[4780]: I1205 06:47:36.898961 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:36Z","lastTransitionTime":"2025-12-05T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.001369 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.001652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.001715 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.001791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.001864 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.104284 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.104335 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.104347 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.104366 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.104379 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.137782 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.137969 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.138228 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:37 crc kubenswrapper[4780]: E1205 06:47:37.138541 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:37 crc kubenswrapper[4780]: E1205 06:47:37.138960 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:37 crc kubenswrapper[4780]: E1205 06:47:37.138834 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.154553 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.206893 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.206938 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.206946 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.206960 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.206970 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.309426 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.309452 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.309460 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.309472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.309481 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.412021 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.412055 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.412063 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.412075 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.412085 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.514377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.514436 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.514450 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.514471 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.514490 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.617210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.617249 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.617261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.617277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.617289 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.720095 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.720132 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.720140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.720153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.720162 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.823004 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.823064 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.823078 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.823096 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.823109 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.926310 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.926355 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.926370 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.926392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:37 crc kubenswrapper[4780]: I1205 06:47:37.926407 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:37Z","lastTransitionTime":"2025-12-05T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.028921 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.028967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.028980 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.029000 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.029013 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.131644 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.131677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.131685 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.131699 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.131709 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.138088 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:38 crc kubenswrapper[4780]: E1205 06:47:38.138191 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.234233 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.234269 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.234277 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.234291 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.234300 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.337261 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.337305 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.337315 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.337331 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.337341 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.440130 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.440170 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.440184 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.440200 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.440214 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.543098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.543135 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.543143 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.543168 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.543176 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.645812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.645945 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.645967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.645998 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.646210 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.748379 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.748968 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.749014 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.749045 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.749092 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.852060 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.852120 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.852133 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.852148 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.852156 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.955250 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.955294 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.955307 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.955323 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:38 crc kubenswrapper[4780]: I1205 06:47:38.955335 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:38Z","lastTransitionTime":"2025-12-05T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.058017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.058061 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.058072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.058088 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.058103 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.138265 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.138341 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.138351 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:39 crc kubenswrapper[4780]: E1205 06:47:39.138503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:39 crc kubenswrapper[4780]: E1205 06:47:39.138668 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:39 crc kubenswrapper[4780]: E1205 06:47:39.138823 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.160652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.160728 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.160743 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.160759 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.160771 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.263699 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.263756 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.263797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.263821 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.263837 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.365990 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.366094 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.366117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.366145 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.366166 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.468820 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.468894 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.468907 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.468922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.468932 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.572070 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.572110 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.572122 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.572140 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.572151 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.674989 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.675025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.675035 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.675051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.675064 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.776919 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.776972 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.776984 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.777001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.777327 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.879560 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.879591 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.879599 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.879615 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.879624 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.982497 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.982534 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.982549 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.982566 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:39 crc kubenswrapper[4780]: I1205 06:47:39.982576 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:39Z","lastTransitionTime":"2025-12-05T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.085416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.085492 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.085521 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.085551 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.085575 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.138709 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:40 crc kubenswrapper[4780]: E1205 06:47:40.139029 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.189141 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.189198 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.189217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.189245 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.189265 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.291681 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.291779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.291799 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.292229 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.292267 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.394409 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.394465 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.394480 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.394501 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.394516 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.497339 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.497387 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.497398 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.497416 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.497429 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.600665 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.600717 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.600730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.600750 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.600763 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.703609 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.703687 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.703704 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.703724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.703736 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.805734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.805776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.805784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.805797 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.805807 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.908572 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.908636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.908654 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.908677 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:40 crc kubenswrapper[4780]: I1205 06:47:40.908695 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:40Z","lastTransitionTime":"2025-12-05T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.011809 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.011870 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.011912 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.011936 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.011954 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.114024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.114102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.114125 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.114154 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.114177 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.137840 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.137936 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.137846 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:41 crc kubenswrapper[4780]: E1205 06:47:41.138357 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:41 crc kubenswrapper[4780]: E1205 06:47:41.138511 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:41 crc kubenswrapper[4780]: E1205 06:47:41.138610 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.217109 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.217161 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.217182 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.217204 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.217222 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.320473 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.320518 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.320527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.320540 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.320550 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.428378 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.428427 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.428440 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.428458 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.428471 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.531298 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.531354 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.531370 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.531392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.531409 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.634658 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.634701 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.634709 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.634722 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.634731 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.736598 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.736639 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.736648 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.736667 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.736677 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.838736 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.838776 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.838786 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.838800 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.838810 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.942761 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.942811 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.942819 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.942832 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:41 crc kubenswrapper[4780]: I1205 06:47:41.942841 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:41Z","lastTransitionTime":"2025-12-05T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.044862 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.044927 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.044937 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.044954 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.044963 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.138222 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:42 crc kubenswrapper[4780]: E1205 06:47:42.138338 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.147251 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.147406 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.147472 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.147545 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.147642 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.250056 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.250093 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.250102 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.250115 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.250123 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.351963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.351995 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.352002 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.352017 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.352027 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.454646 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.454719 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.454742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.454768 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.454788 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.557813 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.557859 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.557908 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.557930 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.557945 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.660589 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.660669 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.660692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.660724 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.660745 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.763007 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.763112 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.763131 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.763153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.763168 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.865499 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.865546 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.865559 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.865579 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.865591 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.967812 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.967858 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.967869 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.967900 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:42 crc kubenswrapper[4780]: I1205 06:47:42.967912 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:42Z","lastTransitionTime":"2025-12-05T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.069978 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.070039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.070051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.070067 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.070078 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.137697 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.137788 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:43 crc kubenswrapper[4780]: E1205 06:47:43.137851 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:43 crc kubenswrapper[4780]: E1205 06:47:43.137955 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.137788 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:43 crc kubenswrapper[4780]: E1205 06:47:43.138039 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.172072 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.172127 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.172151 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.172179 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.172199 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
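The three pods cycling through "No sandbox for pod can be found. Need to start a new one" followed immediately by "Error syncing pod, skipping" are blocked on the same condition: a pod that needs the cluster network cannot get a sandbox with an IP until a CNI plugin is configured, so the sync loop bails out early and retries on the next pass. Below is a minimal sketch of that guard, assuming (as the log pattern suggests) that hostNetwork pods are exempt; the names syncPod and pod are illustrative, not kubelet identifiers.

// Sketch of the guard behind "Error syncing pod, skipping ... network
// is not ready": pods needing the cluster network are not synced until
// NetworkReady is true. Illustrative only.
package main

import "fmt"

type pod struct {
	name        string
	hostNetwork bool
}

// syncPod refuses to proceed for cluster-network pods while the
// runtime network is down; hostNetwork pods are assumed exempt.
func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return fmt.Errorf("network is not ready: skipping %s", p.name)
	}
	// ... create sandbox, start containers ...
	return nil
}

func main() {
	for _, p := range []pod{
		{"openshift-multus/network-metrics-daemon-zkjck", false},
		{"some-host-network-pod", true}, // hypothetical pod name
	} {
		if err := syncPod(p, false); err != nil {
			fmt.Println("E:", err)
			continue
		}
		fmt.Println("synced:", p.name)
	}
}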
Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.275311 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.275353 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.275365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.275382 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.275394 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.313803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:43 crc kubenswrapper[4780]: E1205 06:47:43.313956 4780 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 06:47:43 crc kubenswrapper[4780]: E1205 06:47:43.314007 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs podName:c29a8f3d-4c29-4bfe-a8ab-6d28970106be nodeName:}" failed. No retries permitted until 2025-12-05 06:48:47.313993025 +0000 UTC m=+161.383509357 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs") pod "network-metrics-daemon-zkjck" (UID: "c29a8f3d-4c29-4bfe-a8ab-6d28970106be") : object "openshift-multus"/"metrics-daemon-secret" not registered
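The volume manager's note "(durationBeforeRetry 1m4s)", scheduling the next MountVolume attempt about 64 s out, is consistent with a capped exponential backoff that doubles the wait after each failure: a 500 ms initial delay reaches exactly 1m4s after eight failed attempts. The sketch below reproduces that progression; the initial delay, factor, and cap are illustrative assumptions, not kubelet's actual tuning.

// Sketch of the capped exponential backoff implied by
// "(durationBeforeRetry 1m4s)" in the record above.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	current time.Duration // wait before the next retry
	max     time.Duration // cap on the wait
}

// next returns the wait for this failure and doubles it for the
// following one, clamped at max.
func (b *backoff) next() time.Duration {
	d := b.current
	b.current *= 2
	if b.current > b.max {
		b.current = b.max
	}
	return d
}

func main() {
	// 500 ms initial delay and 2-minute cap are assumed values.
	b := &backoff{current: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("attempt %d: retry in %s\n", i, b.next())
	}
	// attempt 8 prints "1m4s", matching durationBeforeRetry above.
}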
Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.377774 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.377830 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.377849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.377922 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.377954 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.480617 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.480673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.480690 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.480713 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.480731 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.583328 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.583367 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.583376 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.583390 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.583402 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.686117 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.686171 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.686183 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.686201 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.686213 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.788684 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.788727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.788737 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.788752 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.788761 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.890914 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.890956 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.890967 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.890983 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.890996 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.993034 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.993074 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.993086 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.993101 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:43 crc kubenswrapper[4780]: I1205 06:47:43.993112 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:43Z","lastTransitionTime":"2025-12-05T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.095039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.095087 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.095098 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.095114 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.095125 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.138239 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:44 crc kubenswrapper[4780]: E1205 06:47:44.138448 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.196730 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.196779 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.196791 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.196808 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.196823 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.299431 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.299467 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.299476 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.299488 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.299500 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.403217 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.403285 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.403304 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.403330 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.403348 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.505333 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.505392 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.505408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.505434 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.505453 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.607991 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.608023 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.608034 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.608051 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.608063 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.710673 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.710716 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.710727 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.710742 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.710754 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.813306 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.813363 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.813380 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.813405 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.813425 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.915695 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.915734 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.915744 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.915760 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:44 crc kubenswrapper[4780]: I1205 06:47:44.915771 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:44Z","lastTransitionTime":"2025-12-05T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.017849 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.018001 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.018020 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.018039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.018051 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.120343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.120527 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.120657 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.120729 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.120792 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.137681 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.137781 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.137715 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:45 crc kubenswrapper[4780]: E1205 06:47:45.137994 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:45 crc kubenswrapper[4780]: E1205 06:47:45.137927 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:45 crc kubenswrapper[4780]: E1205 06:47:45.138047 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.222770 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.222805 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.222816 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.222831 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.222842 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.325372 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.325417 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.325430 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.325448 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.325460 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.427290 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.427329 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.427343 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.427357 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.427366 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.529365 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.529401 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.529411 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.529425 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.529435 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.630847 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.630924 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.630944 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.630964 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.630977 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.733766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.734025 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.734092 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.734158 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.734225 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.836142 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.836180 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.836190 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.836206 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.836216 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.938616 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.938680 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.938698 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.938726 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:45 crc kubenswrapper[4780]: I1205 06:47:45.938742 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:45Z","lastTransitionTime":"2025-12-05T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.040351 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.040385 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.040394 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.040408 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.040418 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.137791 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:47:46 crc kubenswrapper[4780]: E1205 06:47:46.137982 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.142008 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.142107 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.142125 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.142138 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.142146 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
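
NetworkReady stays false until a CNI plugin (here OVN-Kubernetes, delivered via multus) writes a network config into /etc/kubernetes/cni/net.d/. A rough sketch of the kind of readiness gate behind the repeated "no CNI configuration file" message, scanning the conf dir for config files; this is an illustration only, not CRI-O's actual implementation, and the directory path is taken from the log:

    // cnicheck.go - illustrative sketch: the runtime considers the network
    // ready once at least one config file exists in the CNI conf dir.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func cniConfigured(confDir string) bool {
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, err := filepath.Glob(filepath.Join(confDir, pat))
    		if err == nil && len(matches) > 0 {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // path taken from the log messages
    	if !cniConfigured(dir) {
    		fmt.Fprintf(os.Stderr, "no CNI configuration file in %s/. Has your network provider started?\n", dir)
    		os.Exit(1)
    	}
    	fmt.Println("NetworkReady=true")
    }
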
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.169390 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podStartSLOduration=81.169370598 podStartE2EDuration="1m21.169370598s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.157735365 +0000 UTC m=+100.227251697" watchObservedRunningTime="2025-12-05 06:47:46.169370598 +0000 UTC m=+100.238886920"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.183796 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.183006808 podStartE2EDuration="1m21.183006808s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.182925156 +0000 UTC m=+100.252441508" watchObservedRunningTime="2025-12-05 06:47:46.183006808 +0000 UTC m=+100.252523140"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.184094 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bwf64" podStartSLOduration=81.184087438 podStartE2EDuration="1m21.184087438s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.169360198 +0000 UTC m=+100.238876540" watchObservedRunningTime="2025-12-05 06:47:46.184087438 +0000 UTC m=+100.253603770"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.198685 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.198666383 podStartE2EDuration="1m20.198666383s" podCreationTimestamp="2025-12-05 06:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.197913093 +0000 UTC m=+100.267429435" watchObservedRunningTime="2025-12-05 06:47:46.198666383 +0000 UTC m=+100.268182715"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.207434 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.207417586 podStartE2EDuration="31.207417586s" podCreationTimestamp="2025-12-05 06:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.206915453 +0000 UTC m=+100.276431795" watchObservedRunningTime="2025-12-05 06:47:46.207417586 +0000 UTC m=+100.276933918"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.245772 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.245822 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.245837 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
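
Each podStartSLOduration above is simply watchObservedRunningTime minus podCreationTimestamp: for machine-config-daemon-mjftd, 06:47:46.169370598 minus 06:46:25 is 81.169370598s, which the tracker also prints as podStartE2EDuration="1m21.169370598s" (the pulls are zero-valued because the images were already present). A short sketch reproducing the arithmetic from the logged timestamps:

    // slomath.go - reproduces the podStartSLOduration arithmetic from the
    // pod_startup_latency_tracker entries above, using the logged timestamps.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// layout matches Go's default time.Time formatting used in the log
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-12-05 06:46:25 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-12-05 06:47:46.169370598 +0000 UTC")
    	fmt.Println(running.Sub(created)) // 1m21.169370598s
    }
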
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.245855 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.245867 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.274627 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.274602805 podStartE2EDuration="9.274602805s" podCreationTimestamp="2025-12-05 06:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.255449093 +0000 UTC m=+100.324965455" watchObservedRunningTime="2025-12-05 06:47:46.274602805 +0000 UTC m=+100.344119147"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.293764 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-frbm8" podStartSLOduration=82.293743037 podStartE2EDuration="1m22.293743037s" podCreationTimestamp="2025-12-05 06:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.293650475 +0000 UTC m=+100.363166807" watchObservedRunningTime="2025-12-05 06:47:46.293743037 +0000 UTC m=+100.363259369"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.347963 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.348636 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.348747 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.348928 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.349097 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.353584 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tdkql" podStartSLOduration=81.353567742 podStartE2EDuration="1m21.353567742s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.351593626 +0000 UTC m=+100.421109968" watchObservedRunningTime="2025-12-05 06:47:46.353567742 +0000 UTC m=+100.423084094" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.367772 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-scs78" podStartSLOduration=81.367751066 podStartE2EDuration="1m21.367751066s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.367400196 +0000 UTC m=+100.436916538" watchObservedRunningTime="2025-12-05 06:47:46.367751066 +0000 UTC m=+100.437267398" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.378569 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-j76x7" podStartSLOduration=81.378552877 podStartE2EDuration="1m21.378552877s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.378528366 +0000 UTC m=+100.448044698" watchObservedRunningTime="2025-12-05 06:47:46.378552877 +0000 UTC m=+100.448069209" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.389607 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.389589614 podStartE2EDuration="45.389589614s" podCreationTimestamp="2025-12-05 06:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:46.389514551 +0000 UTC m=+100.459030893" watchObservedRunningTime="2025-12-05 06:47:46.389589614 +0000 UTC m=+100.459105946" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.450981 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.451015 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.451024 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.451039 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.451048 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.553718 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.553755 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.553766 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.553784 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.553795 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.680635 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.680671 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.680679 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.680692 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.680701 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.784304 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.784351 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.784361 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.784377 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.784386 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.887153 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.887186 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.887196 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.887210 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.887223 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.952583 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.952652 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.952675 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.952707 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 06:47:46 crc kubenswrapper[4780]: I1205 06:47:46.952730 4780 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T06:47:46Z","lastTransitionTime":"2025-12-05T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.002081 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9"] Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.002445 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.004722 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.005648 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.006794 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.007655 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.138540 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.138599 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:47 crc kubenswrapper[4780]: E1205 06:47:47.138672 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:47 crc kubenswrapper[4780]: E1205 06:47:47.138769 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.139051 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:47 crc kubenswrapper[4780]: E1205 06:47:47.139205 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.147039 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.147091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.147121 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d05954e-fad9-4622-af45-0903d0b43960-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.147143 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d05954e-fad9-4622-af45-0903d0b43960-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.147158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d05954e-fad9-4622-af45-0903d0b43960-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248191 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248472 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d05954e-fad9-4622-af45-0903d0b43960-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d05954e-fad9-4622-af45-0903d0b43960-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: 
\"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d05954e-fad9-4622-af45-0903d0b43960-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.248264 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.249011 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1d05954e-fad9-4622-af45-0903d0b43960-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.249418 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d05954e-fad9-4622-af45-0903d0b43960-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.253389 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d05954e-fad9-4622-af45-0903d0b43960-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.265495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d05954e-fad9-4622-af45-0903d0b43960-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6dvj9\" (UID: \"1d05954e-fad9-4622-af45-0903d0b43960\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.319455 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" Dec 05 06:47:47 crc kubenswrapper[4780]: W1205 06:47:47.331972 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d05954e_fad9_4622_af45_0903d0b43960.slice/crio-82ccae5e81e123a5c2a97e808a2575ac45d1c5289b38ca5b15bff0ebbe7c7121 WatchSource:0}: Error finding container 82ccae5e81e123a5c2a97e808a2575ac45d1c5289b38ca5b15bff0ebbe7c7121: Status 404 returned error can't find the container with id 82ccae5e81e123a5c2a97e808a2575ac45d1c5289b38ca5b15bff0ebbe7c7121 Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.653037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" event={"ID":"1d05954e-fad9-4622-af45-0903d0b43960","Type":"ContainerStarted","Data":"afed53c9b3dd96e459b8802009db20a0ec830b66743e0908ab4cdc33936eafc8"} Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.653094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" event={"ID":"1d05954e-fad9-4622-af45-0903d0b43960","Type":"ContainerStarted","Data":"82ccae5e81e123a5c2a97e808a2575ac45d1c5289b38ca5b15bff0ebbe7c7121"} Dec 05 06:47:47 crc kubenswrapper[4780]: I1205 06:47:47.668802 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6dvj9" podStartSLOduration=82.668782049 podStartE2EDuration="1m22.668782049s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:47:47.666329731 +0000 UTC m=+101.735846083" watchObservedRunningTime="2025-12-05 06:47:47.668782049 +0000 UTC m=+101.738298381" Dec 05 06:47:48 crc kubenswrapper[4780]: I1205 06:47:48.138943 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:48 crc kubenswrapper[4780]: E1205 06:47:48.139126 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:49 crc kubenswrapper[4780]: I1205 06:47:49.137701 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:49 crc kubenswrapper[4780]: I1205 06:47:49.137756 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:49 crc kubenswrapper[4780]: I1205 06:47:49.137720 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:49 crc kubenswrapper[4780]: E1205 06:47:49.138320 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:49 crc kubenswrapper[4780]: E1205 06:47:49.138442 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:49 crc kubenswrapper[4780]: I1205 06:47:49.138470 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:47:49 crc kubenswrapper[4780]: E1205 06:47:49.138560 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:49 crc kubenswrapper[4780]: E1205 06:47:49.138612 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lf5cd_openshift-ovn-kubernetes(61c4a70b-17c4-4f09-a541-5161825c4c03)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" Dec 05 06:47:50 crc kubenswrapper[4780]: I1205 06:47:50.138309 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:50 crc kubenswrapper[4780]: E1205 06:47:50.138439 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:51 crc kubenswrapper[4780]: I1205 06:47:51.138048 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:51 crc kubenswrapper[4780]: I1205 06:47:51.138101 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:51 crc kubenswrapper[4780]: I1205 06:47:51.138121 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:51 crc kubenswrapper[4780]: E1205 06:47:51.138288 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:51 crc kubenswrapper[4780]: E1205 06:47:51.138348 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:51 crc kubenswrapper[4780]: E1205 06:47:51.138223 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:52 crc kubenswrapper[4780]: I1205 06:47:52.138611 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:52 crc kubenswrapper[4780]: E1205 06:47:52.138713 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:53 crc kubenswrapper[4780]: I1205 06:47:53.137900 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:53 crc kubenswrapper[4780]: I1205 06:47:53.137940 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:53 crc kubenswrapper[4780]: E1205 06:47:53.138011 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:53 crc kubenswrapper[4780]: I1205 06:47:53.138102 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:53 crc kubenswrapper[4780]: E1205 06:47:53.138424 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:53 crc kubenswrapper[4780]: E1205 06:47:53.138640 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:54 crc kubenswrapper[4780]: I1205 06:47:54.138408 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:54 crc kubenswrapper[4780]: E1205 06:47:54.138578 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:55 crc kubenswrapper[4780]: I1205 06:47:55.138078 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:55 crc kubenswrapper[4780]: I1205 06:47:55.138138 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:55 crc kubenswrapper[4780]: I1205 06:47:55.138149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:55 crc kubenswrapper[4780]: E1205 06:47:55.138227 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:55 crc kubenswrapper[4780]: E1205 06:47:55.138281 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:55 crc kubenswrapper[4780]: E1205 06:47:55.138356 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:56 crc kubenswrapper[4780]: I1205 06:47:56.138364 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:56 crc kubenswrapper[4780]: E1205 06:47:56.139563 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:57 crc kubenswrapper[4780]: I1205 06:47:57.137967 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:57 crc kubenswrapper[4780]: I1205 06:47:57.138281 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:57 crc kubenswrapper[4780]: E1205 06:47:57.138356 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:57 crc kubenswrapper[4780]: I1205 06:47:57.138443 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:57 crc kubenswrapper[4780]: E1205 06:47:57.138564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:57 crc kubenswrapper[4780]: E1205 06:47:57.139180 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.137744 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:47:58 crc kubenswrapper[4780]: E1205 06:47:58.138204 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.681297 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/1.log" Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.681824 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/0.log" Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.681896 4780 generic.go:334] "Generic (PLEG): container finished" podID="74991823-72ec-4b41-bb63-e92307688c30" containerID="ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a" exitCode=1 Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.681923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerDied","Data":"ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a"} Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.681959 4780 scope.go:117] "RemoveContainer" containerID="13ecba891faeeaa626f493f2177164d2376e71c020e1703be6dc75f9bba23e70" Dec 05 06:47:58 crc kubenswrapper[4780]: I1205 06:47:58.682678 4780 scope.go:117] "RemoveContainer" containerID="ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a" Dec 05 06:47:58 crc kubenswrapper[4780]: E1205 06:47:58.683052 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bwf64_openshift-multus(74991823-72ec-4b41-bb63-e92307688c30)\"" pod="openshift-multus/multus-bwf64" podUID="74991823-72ec-4b41-bb63-e92307688c30" Dec 05 06:47:59 crc kubenswrapper[4780]: I1205 06:47:59.137825 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:47:59 crc kubenswrapper[4780]: E1205 06:47:59.137972 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:47:59 crc kubenswrapper[4780]: I1205 06:47:59.137839 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:47:59 crc kubenswrapper[4780]: E1205 06:47:59.138038 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:47:59 crc kubenswrapper[4780]: I1205 06:47:59.137825 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:47:59 crc kubenswrapper[4780]: E1205 06:47:59.138098 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:47:59 crc kubenswrapper[4780]: I1205 06:47:59.686353 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/1.log" Dec 05 06:48:00 crc kubenswrapper[4780]: I1205 06:48:00.137984 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:00 crc kubenswrapper[4780]: E1205 06:48:00.138149 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:01 crc kubenswrapper[4780]: I1205 06:48:01.138076 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:01 crc kubenswrapper[4780]: I1205 06:48:01.138079 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:01 crc kubenswrapper[4780]: E1205 06:48:01.138306 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:01 crc kubenswrapper[4780]: I1205 06:48:01.138124 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:01 crc kubenswrapper[4780]: E1205 06:48:01.138394 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:01 crc kubenswrapper[4780]: E1205 06:48:01.138660 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.138235 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:02 crc kubenswrapper[4780]: E1205 06:48:02.138361 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.139642 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.695446 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.698069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerStarted","Data":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.698517 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.728393 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podStartSLOduration=97.728375277 podStartE2EDuration="1m37.728375277s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:02.727369418 +0000 UTC m=+116.796885760" watchObservedRunningTime="2025-12-05 06:48:02.728375277 +0000 UTC m=+116.797891609" Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.923295 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zkjck"] Dec 05 06:48:02 crc kubenswrapper[4780]: I1205 06:48:02.923420 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:02 crc kubenswrapper[4780]: E1205 06:48:02.923517 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:03 crc kubenswrapper[4780]: I1205 06:48:03.138553 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:03 crc kubenswrapper[4780]: I1205 06:48:03.138658 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:03 crc kubenswrapper[4780]: E1205 06:48:03.138918 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:03 crc kubenswrapper[4780]: E1205 06:48:03.139001 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:04 crc kubenswrapper[4780]: I1205 06:48:04.138070 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:04 crc kubenswrapper[4780]: E1205 06:48:04.138263 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:05 crc kubenswrapper[4780]: I1205 06:48:05.138152 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:05 crc kubenswrapper[4780]: I1205 06:48:05.138152 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:05 crc kubenswrapper[4780]: E1205 06:48:05.138987 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:05 crc kubenswrapper[4780]: E1205 06:48:05.139091 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:05 crc kubenswrapper[4780]: I1205 06:48:05.138167 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:05 crc kubenswrapper[4780]: E1205 06:48:05.139498 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:06 crc kubenswrapper[4780]: E1205 06:48:06.104309 4780 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 05 06:48:06 crc kubenswrapper[4780]: I1205 06:48:06.138687 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:06 crc kubenswrapper[4780]: E1205 06:48:06.141650 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:06 crc kubenswrapper[4780]: E1205 06:48:06.220573 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 06:48:07 crc kubenswrapper[4780]: I1205 06:48:07.138500 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:07 crc kubenswrapper[4780]: I1205 06:48:07.138595 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:07 crc kubenswrapper[4780]: E1205 06:48:07.138659 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:07 crc kubenswrapper[4780]: I1205 06:48:07.138705 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:07 crc kubenswrapper[4780]: E1205 06:48:07.138785 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:07 crc kubenswrapper[4780]: E1205 06:48:07.138944 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:08 crc kubenswrapper[4780]: I1205 06:48:08.138521 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:08 crc kubenswrapper[4780]: E1205 06:48:08.138652 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:09 crc kubenswrapper[4780]: I1205 06:48:09.138545 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:09 crc kubenswrapper[4780]: E1205 06:48:09.139123 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:09 crc kubenswrapper[4780]: I1205 06:48:09.138703 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:09 crc kubenswrapper[4780]: E1205 06:48:09.139797 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:09 crc kubenswrapper[4780]: I1205 06:48:09.138675 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:09 crc kubenswrapper[4780]: E1205 06:48:09.140150 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:10 crc kubenswrapper[4780]: I1205 06:48:10.138751 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:10 crc kubenswrapper[4780]: E1205 06:48:10.139113 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:11 crc kubenswrapper[4780]: I1205 06:48:11.137647 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:11 crc kubenswrapper[4780]: E1205 06:48:11.137769 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:11 crc kubenswrapper[4780]: I1205 06:48:11.137647 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:11 crc kubenswrapper[4780]: E1205 06:48:11.137853 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:11 crc kubenswrapper[4780]: I1205 06:48:11.138495 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:11 crc kubenswrapper[4780]: E1205 06:48:11.138616 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:11 crc kubenswrapper[4780]: E1205 06:48:11.221947 4780 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 06:48:12 crc kubenswrapper[4780]: I1205 06:48:12.137696 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:12 crc kubenswrapper[4780]: E1205 06:48:12.137806 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:13 crc kubenswrapper[4780]: I1205 06:48:13.137986 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:13 crc kubenswrapper[4780]: I1205 06:48:13.138080 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:13 crc kubenswrapper[4780]: E1205 06:48:13.138125 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:13 crc kubenswrapper[4780]: I1205 06:48:13.138183 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:13 crc kubenswrapper[4780]: E1205 06:48:13.138219 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:13 crc kubenswrapper[4780]: E1205 06:48:13.138375 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:14 crc kubenswrapper[4780]: I1205 06:48:14.140059 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:14 crc kubenswrapper[4780]: E1205 06:48:14.140209 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:14 crc kubenswrapper[4780]: I1205 06:48:14.140533 4780 scope.go:117] "RemoveContainer" containerID="ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a" Dec 05 06:48:14 crc kubenswrapper[4780]: I1205 06:48:14.736118 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/1.log" Dec 05 06:48:14 crc kubenswrapper[4780]: I1205 06:48:14.736545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerStarted","Data":"cfe468bd75622ef6b9a05d131f22ba9378c87151c68cb9be64e2dca88782ff9a"} Dec 05 06:48:15 crc kubenswrapper[4780]: I1205 06:48:15.138382 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:15 crc kubenswrapper[4780]: I1205 06:48:15.138431 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:15 crc kubenswrapper[4780]: I1205 06:48:15.138431 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:15 crc kubenswrapper[4780]: E1205 06:48:15.138571 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-zkjck" podUID="c29a8f3d-4c29-4bfe-a8ab-6d28970106be" Dec 05 06:48:15 crc kubenswrapper[4780]: E1205 06:48:15.138679 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 06:48:15 crc kubenswrapper[4780]: E1205 06:48:15.138803 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 06:48:16 crc kubenswrapper[4780]: I1205 06:48:16.138100 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:16 crc kubenswrapper[4780]: E1205 06:48:16.138922 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.138710 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.138720 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.138723 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141204 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141216 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141231 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141314 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141845 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.141932 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.436507 4780 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.474292 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tjqz5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.475172 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.477344 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.478410 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.478410 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.479985 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.480250 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.480506 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.481077 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.481969 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.482434 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qccq"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.483141 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.486744 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.487224 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.489264 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.489603 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.490389 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.490825 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.491426 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.491555 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmn88"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.492077 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sdctn"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.492403 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.492512 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.492778 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493013 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493043 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493205 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493428 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493623 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.493507 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.494603 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.494743 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.495028 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.495036 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.496842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-trqhd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.497651 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.513557 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.514166 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.514477 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.517209 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.519846 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.520080 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.520169 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.530482 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.530628 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.530633 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.530776 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.530936 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531015 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531160 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531472 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531572 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531682 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531788 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531951 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531708 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531699 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532071 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.531743 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532179 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532249 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532304 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532328 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.532935 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.533061 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.533116 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 
06:48:17.534052 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5xj2l"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.534873 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.535089 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27sld"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.536080 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q4mtd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.536188 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.536653 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.545126 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.545860 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.547038 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.547659 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550223 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550270 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2jff"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-trusted-ca\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550634 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-auth-proxy-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550657 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550703 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550718 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phnrf\" (UniqueName: \"kubernetes.io/projected/a83a70c0-d58c-498b-bce0-b8823ff40526-kube-api-access-phnrf\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550735 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-serving-cert\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550752 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2hh\" (UniqueName: \"kubernetes.io/projected/d32f2abc-ca84-43ff-bee2-65a7cecff5d2-kube-api-access-mp2hh\") pod \"downloads-7954f5f757-sdctn\" (UID: \"d32f2abc-ca84-43ff-bee2-65a7cecff5d2\") " pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550768 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0299e11c-ff9b-4b45-826b-5289efbfbef8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550792 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0fda723a-14f8-4fda-950f-fb95725c78ad-machine-approver-tls\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550804 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mw286"] Dec 05 
06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550811 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550950 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.550979 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551018 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-encryption-config\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551096 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgp5s\" (UniqueName: \"kubernetes.io/projected/0299e11c-ff9b-4b45-826b-5289efbfbef8-kube-api-access-xgp5s\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551119 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551101 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551143 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551490 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-dir\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551585 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9www\" (UniqueName: \"kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551666 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfks\" (UniqueName: \"kubernetes.io/projected/0fda723a-14f8-4fda-950f-fb95725c78ad-kube-api-access-9nfks\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551819 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.551925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/608368ed-ece7-45a1-b13d-50ede7867c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552006 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-config\") pod 
\"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552077 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqqkw\" (UniqueName: \"kubernetes.io/projected/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-kube-api-access-xqqkw\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-policies\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552230 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552298 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608368ed-ece7-45a1-b13d-50ede7867c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552381 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552463 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr26h\" (UniqueName: \"kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-config\") pod \"console-operator-58897d9998-6qccq\" (UID: 
\"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.552837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29td\" (UniqueName: \"kubernetes.io/projected/608368ed-ece7-45a1-b13d-50ede7867c1a-kube-api-access-f29td\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553086 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553164 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-client\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553230 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-images\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553373 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-serving-cert\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553453 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.553835 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.555091 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.555905 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.556094 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.556189 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.556463 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.556643 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.555307 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.556861 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.557519 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.557760 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.558092 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.559533 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.560314 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.560797 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.563285 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.563824 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.563861 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.563903 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.565235 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.565311 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.563896 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.566936 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.567465 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.567479 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.567873 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.568130 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.568551 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.573554 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.573864 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.586361 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.587209 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.587587 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.589546 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.589972 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.590663 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.597467 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.597691 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.598706 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-krv7k"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.599408 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.599650 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.599794 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600033 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600111 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600123 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600226 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600292 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600378 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600469 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.600507 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.601739 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.601900 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.601954 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.602032 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.602075 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.602161 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.604096 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.604575 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 
06:48:17.605582 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.606376 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.606516 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.606704 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.606810 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.606824 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.607095 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.608722 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.610408 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.610637 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.611478 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.612501 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.612656 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.615782 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.618458 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.618827 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.618926 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.629246 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-94sr5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.629937 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.637593 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.638572 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.639078 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.639599 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rh794"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.640253 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.640802 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.641797 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.642724 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.643197 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.643719 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.644197 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.644712 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.645149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.646082 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.646563 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.646799 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.647407 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.648143 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.649106 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.649688 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-trqhd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.650834 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.651709 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.652625 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.653133 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658192 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-client\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658217 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658240 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-images\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-serving-cert\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658308 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658328 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-trusted-ca\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658353 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658383 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-auth-proxy-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-serving-cert\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658440 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658461 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnrf\" (UniqueName: \"kubernetes.io/projected/a83a70c0-d58c-498b-bce0-b8823ff40526-kube-api-access-phnrf\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658485 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2hh\" (UniqueName: \"kubernetes.io/projected/d32f2abc-ca84-43ff-bee2-65a7cecff5d2-kube-api-access-mp2hh\") pod \"downloads-7954f5f757-sdctn\" (UID: \"d32f2abc-ca84-43ff-bee2-65a7cecff5d2\") " pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658506 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0299e11c-ff9b-4b45-826b-5289efbfbef8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/0fda723a-14f8-4fda-950f-fb95725c78ad-machine-approver-tls\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658607 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658626 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-encryption-config\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658645 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.658664 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgp5s\" (UniqueName: \"kubernetes.io/projected/0299e11c-ff9b-4b45-826b-5289efbfbef8-kube-api-access-xgp5s\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.661346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.662147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-trusted-ca\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.662185 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-auth-proxy-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663529 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663583 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663613 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663644 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-dir\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9www\" (UniqueName: \"kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663699 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663746 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nfks\" (UniqueName: \"kubernetes.io/projected/0fda723a-14f8-4fda-950f-fb95725c78ad-kube-api-access-9nfks\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663773 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" 
Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663791 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-config\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663820 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqqkw\" (UniqueName: \"kubernetes.io/projected/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-kube-api-access-xqqkw\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663848 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/608368ed-ece7-45a1-b13d-50ede7867c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663905 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-policies\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663937 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608368ed-ece7-45a1-b13d-50ede7867c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.663993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664047 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664078 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr26h\" (UniqueName: 
\"kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664946 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-config\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664944 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkrh4"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.664975 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.665000 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29td\" (UniqueName: \"kubernetes.io/projected/608368ed-ece7-45a1-b13d-50ede7867c1a-kube-api-access-f29td\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.665343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.666438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.671805 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0fda723a-14f8-4fda-950f-fb95725c78ad-machine-approver-tls\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.672203 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.673048 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.673386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-config\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.673406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.673723 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/608368ed-ece7-45a1-b13d-50ede7867c1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.674232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-client\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.674549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-serving-cert\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.674657 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-policies\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.685205 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0299e11c-ff9b-4b45-826b-5289efbfbef8-images\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc 
kubenswrapper[4780]: I1205 06:48:17.685962 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-config\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.686889 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.687144 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.687308 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a83a70c0-d58c-498b-bce0-b8823ff40526-audit-dir\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.687605 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fda723a-14f8-4fda-950f-fb95725c78ad-config\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.688520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.688975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a83a70c0-d58c-498b-bce0-b8823ff40526-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.691048 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.691617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a83a70c0-d58c-498b-bce0-b8823ff40526-encryption-config\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.666948 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692300 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692324 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-74bc7"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692326 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692538 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.692749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.693014 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.693130 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.693206 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.693599 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.693861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.694329 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-serving-cert\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.694982 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.695160 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9rr45"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.696175 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2jff"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.696259 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.696868 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.697098 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.697860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qccq"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.698897 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608368ed-ece7-45a1-b13d-50ede7867c1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.701575 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.701612 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27sld"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.701623 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-94sr5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.703941 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.704224 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mw286"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.705768 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0299e11c-ff9b-4b45-826b-5289efbfbef8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.706601 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.707734 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmn88"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.708993 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.710125 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5xj2l"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.711326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q4mtd"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.713252 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.715021 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-74bc7"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.716865 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wgk69"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.717486 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.718112 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.718493 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-thgz6"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.719510 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.720313 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.722120 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tjqz5"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.723486 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkrh4"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.725012 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.726097 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.727658 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.729091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.730324 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rh794"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.731465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.732651 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.734115 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.735604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:48:17 crc kubenswrapper[4780]: 
I1205 06:48:17.736799 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.737169 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.738050 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sdctn"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.739433 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.740922 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.742346 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.743699 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9rr45"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.744821 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.746138 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.747697 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wgk69"] Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.757250 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.777189 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.797561 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.818522 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.838687 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.857790 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.877839 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.917107 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.937837 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.958336 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.982498 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 06:48:17 crc kubenswrapper[4780]: I1205 06:48:17.996909 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.017808 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.038457 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.058678 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.078033 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.097488 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.117200 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.137628 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.137813 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.158021 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.177046 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.197870 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.217975 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.237127 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.257276 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.277515 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.298168 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.317496 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.337980 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.357611 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.377155 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.397363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.417208 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.438150 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.457206 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.477629 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" 
Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.517538 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.538055 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.557909 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.577347 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.598328 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.618185 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.638450 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.656466 4780 request.go:700] Waited for 1.016004187s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.658114 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.678010 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.697625 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.717713 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.737858 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.758405 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.777618 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.797731 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.818311 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.838534 
4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.857820 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.879351 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.897421 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.917714 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.938414 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.957831 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.977577 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 06:48:18 crc kubenswrapper[4780]: I1205 06:48:18.998443 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.018512 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.047494 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.058260 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.079289 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.097068 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.118429 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.138384 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.158784 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.178703 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.228231 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f29td\" (UniqueName: \"kubernetes.io/projected/608368ed-ece7-45a1-b13d-50ede7867c1a-kube-api-access-f29td\") pod \"openshift-config-operator-7777fb866f-trqhd\" (UID: \"608368ed-ece7-45a1-b13d-50ede7867c1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.235478 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgp5s\" (UniqueName: \"kubernetes.io/projected/0299e11c-ff9b-4b45-826b-5289efbfbef8-kube-api-access-xgp5s\") pod \"machine-api-operator-5694c8668f-tjqz5\" (UID: \"0299e11c-ff9b-4b45-826b-5289efbfbef8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.266430 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnrf\" (UniqueName: \"kubernetes.io/projected/a83a70c0-d58c-498b-bce0-b8823ff40526-kube-api-access-phnrf\") pod \"apiserver-7bbb656c7d-x6gqb\" (UID: \"a83a70c0-d58c-498b-bce0-b8823ff40526\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.287656 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nfks\" (UniqueName: \"kubernetes.io/projected/0fda723a-14f8-4fda-950f-fb95725c78ad-kube-api-access-9nfks\") pod \"machine-approver-56656f9798-wrx8h\" (UID: \"0fda723a-14f8-4fda-950f-fb95725c78ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.308561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqqkw\" (UniqueName: \"kubernetes.io/projected/14a6c1d4-82cc-4cac-b7c3-4a875e8399b4-kube-api-access-xqqkw\") pod \"console-operator-58897d9998-6qccq\" (UID: \"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4\") " pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.317025 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2hh\" (UniqueName: \"kubernetes.io/projected/d32f2abc-ca84-43ff-bee2-65a7cecff5d2-kube-api-access-mp2hh\") pod \"downloads-7954f5f757-sdctn\" (UID: \"d32f2abc-ca84-43ff-bee2-65a7cecff5d2\") " pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.334220 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.338018 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.344780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr26h\" (UniqueName: \"kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h\") pod \"oauth-openshift-558db77b4-fmn88\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.350668 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.357359 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.378484 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.387084 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.398236 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.419793 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.422019 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.430349 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.482854 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.483306 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.487864 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.488215 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.497686 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.501685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9www\" (UniqueName: \"kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www\") pod \"controller-manager-879f6c89f-577gd\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.519439 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.537770 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.557386 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.578209 4780 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 06:48:19 
crc kubenswrapper[4780]: I1205 06:48:19.600337 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.617954 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.638158 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.657688 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.671696 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sdctn"] Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.676560 4780 request.go:700] Waited for 1.9568198s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.678571 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.682814 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd32f2abc_ca84_43ff_bee2_65a7cecff5d2.slice/crio-ac9458f84b0cd3bb4e2b5f40321951a1bc38130e33b798b4df1b46775faffdb8 WatchSource:0}: Error finding container ac9458f84b0cd3bb4e2b5f40321951a1bc38130e33b798b4df1b46775faffdb8: Status 404 returned error can't find the container with id ac9458f84b0cd3bb4e2b5f40321951a1bc38130e33b798b4df1b46775faffdb8 Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.697354 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.707047 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.755741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sdctn" event={"ID":"d32f2abc-ca84-43ff-bee2-65a7cecff5d2","Type":"ContainerStarted","Data":"ac9458f84b0cd3bb4e2b5f40321951a1bc38130e33b798b4df1b46775faffdb8"} Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.757504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" event={"ID":"0fda723a-14f8-4fda-950f-fb95725c78ad","Type":"ContainerStarted","Data":"9213e77f3986b3d93c0375ff216f6145cc9b21260bb52c519c15598e79fc8d0a"} Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.757525 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" event={"ID":"0fda723a-14f8-4fda-950f-fb95725c78ad","Type":"ContainerStarted","Data":"a36cc09dae186ad3e0547f00244791f351d9c3d41e84dfb0def13d98840558f1"} Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785321 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2q5\" (UniqueName: \"kubernetes.io/projected/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-kube-api-access-kn2q5\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785470 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btd2g\" (UniqueName: \"kubernetes.io/projected/8fd7fc79-44f5-4c00-8897-962ce4018e34-kube-api-access-btd2g\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785516 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4kl\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-kube-api-access-ql4kl\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4bb7d7c-826d-4157-970e-c4d195647287-samples-operator-tls\") 
pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785594 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlc8\" (UniqueName: \"kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785610 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drjz\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785675 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50e81137-0d77-4028-9a46-600476de40b0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-metrics-tls\") pod 
\"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785716 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhqrm\" (UniqueName: \"kubernetes.io/projected/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-kube-api-access-hhqrm\") pod \"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fd7fc79-44f5-4c00-8897-962ce4018e34-service-ca-bundle\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785747 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckcgs\" (UniqueName: \"kubernetes.io/projected/009edd4d-dcfb-4a88-a93e-dbd2430403c1-kube-api-access-ckcgs\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-config\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785793 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0714a1-e162-4459-953a-bf44f6433301-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785836 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-metrics-certs\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785868 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-node-pullsecrets\") pod \"apiserver-76f77b778f-5xj2l\" (UID: 
\"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785898 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-client\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785922 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5bbb474c-dfea-4d7f-802a-efa7e15d8595-metrics-tls\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785936 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785951 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrd22\" (UniqueName: \"kubernetes.io/projected/6d76cd6b-eda9-4487-8665-8e99b372fa38-kube-api-access-hrd22\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785966 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32f1f57-3c18-4efc-be09-e9abfef22c52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785982 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d76cd6b-eda9-4487-8665-8e99b372fa38-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.785996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/471abb3f-f9ef-454a-8f00-87c4846e59f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: 
\"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e81137-0d77-4028-9a46-600476de40b0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786095 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-config\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786118 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786159 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-serving-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786174 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/009edd4d-dcfb-4a88-a93e-dbd2430403c1-serving-cert\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786195 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786235 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p97z\" (UniqueName: \"kubernetes.io/projected/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-kube-api-access-7p97z\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786259 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75s9h\" (UniqueName: \"kubernetes.io/projected/471abb3f-f9ef-454a-8f00-87c4846e59f2-kube-api-access-75s9h\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: \"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786293 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba9f40-589c-4f71-9eff-6fee943bea65-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786308 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786322 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32f1f57-3c18-4efc-be09-e9abfef22c52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786361 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-image-import-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786386 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k49z\" (UniqueName: \"kubernetes.io/projected/b802ab76-8dbe-4ddd-8704-be862bfb7598-kube-api-access-8k49z\") pod \"migrator-59844c95c7-kdc8g\" (UID: \"b802ab76-8dbe-4ddd-8704-be862bfb7598\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786428 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxvz\" (UniqueName: \"kubernetes.io/projected/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-kube-api-access-lxxvz\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786444 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786464 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786511 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50e81137-0d77-4028-9a46-600476de40b0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-service-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9r6\" (UniqueName: \"kubernetes.io/projected/c32f1f57-3c18-4efc-be09-e9abfef22c52-kube-api-access-pf9r6\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786557 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4df77495-3e2c-4c13-823f-f217f0dcb8f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786572 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 
06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786586 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-encryption-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786601 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-client\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786616 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czm22\" (UniqueName: \"kubernetes.io/projected/b4bb7d7c-826d-4157-970e-c4d195647287-kube-api-access-czm22\") pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786689 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-stats-auth\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786765 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0714a1-e162-4459-953a-bf44f6433301-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786793 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786807 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-serving-cert\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786821 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit-dir\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786851 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-serving-cert\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786867 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786899 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnjw7\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-kube-api-access-qnjw7\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb474c-dfea-4d7f-802a-efa7e15d8595-trusted-ca\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.786950 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0714a1-e162-4459-953a-bf44f6433301-config\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787371 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-default-certificate\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkd2\" (UniqueName: 
\"kubernetes.io/projected/d8ba9f40-589c-4f71-9eff-6fee943bea65-kube-api-access-qpkd2\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787408 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787425 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8ba9f40-589c-4f71-9eff-6fee943bea65-proxy-tls\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787456 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787471 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-config\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4df77495-3e2c-4c13-823f-f217f0dcb8f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.787514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: E1205 06:48:19.788753 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:20.288739293 +0000 UTC m=+134.358255625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.818542 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-tjqz5"] Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.822264 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb"] Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.831249 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda83a70c0_d58c_498b_bce0_b8823ff40526.slice/crio-cf2bdf47f9cd776480d259a5a69c972410f1980f7502f629a152a48664c7cf99 WatchSource:0}: Error finding container cf2bdf47f9cd776480d259a5a69c972410f1980f7502f629a152a48664c7cf99: Status 404 returned error can't find the container with id cf2bdf47f9cd776480d259a5a69c972410f1980f7502f629a152a48664c7cf99 Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.834548 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0299e11c_ff9b_4b45_826b_5289efbfbef8.slice/crio-9445a33236f764468e27b596ecf221d2fc9137c6c6b21e96dd868079c13e2e42 WatchSource:0}: Error finding container 9445a33236f764468e27b596ecf221d2fc9137c6c6b21e96dd868079c13e2e42: Status 404 returned error can't find the container with id 9445a33236f764468e27b596ecf221d2fc9137c6c6b21e96dd868079c13e2e42 Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.853903 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6qccq"] Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.870906 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.883533 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881d3d6f_e692_4c33_b3fd_8bdba759d80d.slice/crio-69bb155eb4e4779ca918273a9eb261095f0c573d7513fe55a39700ff2a25d6c8 WatchSource:0}: Error finding container 69bb155eb4e4779ca918273a9eb261095f0c573d7513fe55a39700ff2a25d6c8: Status 404 returned error can't find the container with id 69bb155eb4e4779ca918273a9eb261095f0c573d7513fe55a39700ff2a25d6c8 Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkd2\" (UniqueName: 
\"kubernetes.io/projected/d8ba9f40-589c-4f71-9eff-6fee943bea65-kube-api-access-qpkd2\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889651 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-default-certificate\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889672 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: E1205 06:48:19.889705 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.389682299 +0000 UTC m=+134.459198631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889734 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-apiservice-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-webhook-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtz6\" (UniqueName: \"kubernetes.io/projected/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-kube-api-access-qhtz6\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8ba9f40-589c-4f71-9eff-6fee943bea65-proxy-tls\") pod 
\"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889844 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889914 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-config\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889959 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh699\" (UniqueName: \"kubernetes.io/projected/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-kube-api-access-gh699\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.889984 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890027 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-plugins-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: 
\"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4df77495-3e2c-4c13-823f-f217f0dcb8f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890075 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890095 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f13827-e915-43a8-a59e-5aba80a424c1-config-volume\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890135 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890157 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2q5\" (UniqueName: \"kubernetes.io/projected/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-kube-api-access-kn2q5\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890177 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4bb7d7c-826d-4157-970e-c4d195647287-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btd2g\" (UniqueName: \"kubernetes.io/projected/8fd7fc79-44f5-4c00-8897-962ce4018e34-kube-api-access-btd2g\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890218 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ql4kl\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-kube-api-access-ql4kl\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890238 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890257 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890280 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890324 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drjz\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890346 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlc8\" (UniqueName: \"kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890446 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890482 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5p9\" (UniqueName: \"kubernetes.io/projected/7be48601-40b4-49d8-a6ed-3a0a9de0b668-kube-api-access-vp5p9\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50e81137-0d77-4028-9a46-600476de40b0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-metrics-tls\") pod \"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhqrm\" (UniqueName: \"kubernetes.io/projected/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-kube-api-access-hhqrm\") pod \"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fd7fc79-44f5-4c00-8897-962ce4018e34-service-ca-bundle\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckcgs\" (UniqueName: \"kubernetes.io/projected/009edd4d-dcfb-4a88-a93e-dbd2430403c1-kube-api-access-ckcgs\") pod 
\"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-tmpfs\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-certs\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.890632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-config\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.891329 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.891786 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fd7fc79-44f5-4c00-8897-962ce4018e34-service-ca-bundle\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892207 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892517 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-config\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892619 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 
06:48:19.892705 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-config\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892783 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892836 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0714a1-e162-4459-953a-bf44f6433301-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892860 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-client\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892897 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-node-pullsecrets\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892917 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892918 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892967 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-metrics-certs\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.892984 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-node-pullsecrets\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc 
kubenswrapper[4780]: I1205 06:48:19.892992 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893041 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9dg\" (UniqueName: \"kubernetes.io/projected/abbfbd0d-14f1-496a-b029-1c2f66929e11-kube-api-access-8m9dg\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893056 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrd22\" (UniqueName: \"kubernetes.io/projected/6d76cd6b-eda9-4487-8665-8e99b372fa38-kube-api-access-hrd22\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893502 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5bbb474c-dfea-4d7f-802a-efa7e15d8595-metrics-tls\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893520 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32f1f57-3c18-4efc-be09-e9abfef22c52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893536 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d76cd6b-eda9-4487-8665-8e99b372fa38-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-srv-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893664 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l299\" (UniqueName: \"kubernetes.io/projected/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-kube-api-access-2l299\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.893683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-srv-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.894184 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.894486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.894509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/471abb3f-f9ef-454a-8f00-87c4846e59f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: \"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.894570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-images\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.894934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4df77495-3e2c-4c13-823f-f217f0dcb8f5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.895090 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.895420 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e81137-0d77-4028-9a46-600476de40b0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.895499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/065d59a1-845a-4a35-8f55-6e550e259a33-proxy-tls\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.895529 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.895553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-node-bootstrap-token\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.896077 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.896188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.896280 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-config\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" 
Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.896386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50e81137-0d77-4028-9a46-600476de40b0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.896420 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.902201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-cert\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.902741 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.902914 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0714a1-e162-4459-953a-bf44f6433301-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.902998 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-default-certificate\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8ba9f40-589c-4f71-9eff-6fee943bea65-proxy-tls\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903342 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-metrics-certs\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5bbb474c-dfea-4d7f-802a-efa7e15d8595-metrics-tls\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: E1205 06:48:19.903691 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.403669068 +0000 UTC m=+134.473185390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-key\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903949 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.903983 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/471abb3f-f9ef-454a-8f00-87c4846e59f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: \"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-serving-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904072 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/009edd4d-dcfb-4a88-a93e-dbd2430403c1-serving-cert\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904105 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wtd5\" (UniqueName: \"kubernetes.io/projected/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-kube-api-access-5wtd5\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: 
\"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904177 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gssmf\" (UniqueName: \"kubernetes.io/projected/065d59a1-845a-4a35-8f55-6e550e259a33-kube-api-access-gssmf\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904404 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-client\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-cabundle\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904565 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p97z\" (UniqueName: \"kubernetes.io/projected/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-kube-api-access-7p97z\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75s9h\" (UniqueName: \"kubernetes.io/projected/471abb3f-f9ef-454a-8f00-87c4846e59f2-kube-api-access-75s9h\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: \"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba9f40-589c-4f71-9eff-6fee943bea65-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904708 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904963 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32f1f57-3c18-4efc-be09-e9abfef22c52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.904996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-profile-collector-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-image-import-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905050 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-serving-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrwj\" (UniqueName: \"kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k49z\" (UniqueName: \"kubernetes.io/projected/b802ab76-8dbe-4ddd-8704-be862bfb7598-kube-api-access-8k49z\") pod \"migrator-59844c95c7-kdc8g\" (UID: \"b802ab76-8dbe-4ddd-8704-be862bfb7598\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef700d08-f99c-4682-9099-ac6b8263b400-serving-cert\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" 
Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905454 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftgg\" (UniqueName: \"kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.905951 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxvz\" (UniqueName: \"kubernetes.io/projected/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-kube-api-access-lxxvz\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.909415 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8ba9f40-589c-4f71-9eff-6fee943bea65-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.907608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c32f1f57-3c18-4efc-be09-e9abfef22c52-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.907774 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4bb7d7c-826d-4157-970e-c4d195647287-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.910316 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-image-import-ca\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.910516 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-mountpoint-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.907564 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-config\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.912843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.912925 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/009edd4d-dcfb-4a88-a93e-dbd2430403c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.912943 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzwd\" (UniqueName: \"kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.913215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbxb\" (UniqueName: \"kubernetes.io/projected/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-kube-api-access-npbxb\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50e81137-0d77-4028-9a46-600476de40b0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915330 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-service-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915373 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9r6\" 
(UniqueName: \"kubernetes.io/projected/c32f1f57-3c18-4efc-be09-e9abfef22c52-kube-api-access-pf9r6\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915415 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4df77495-3e2c-4c13-823f-f217f0dcb8f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915528 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915611 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-encryption-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915663 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmxp\" (UniqueName: \"kubernetes.io/projected/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-kube-api-access-nkmxp\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.915701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.916590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.917295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-etcd-service-ca\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.918316 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.918778 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-client\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.919122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czm22\" (UniqueName: \"kubernetes.io/projected/b4bb7d7c-826d-4157-970e-c4d195647287-kube-api-access-czm22\") pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.919360 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-socket-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.919436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-stats-auth\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.919499 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-registration-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.924280 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmn88"] Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.924727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnlg\" (UniqueName: \"kubernetes.io/projected/ef700d08-f99c-4682-9099-ac6b8263b400-kube-api-access-rlnlg\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.924774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/75f13827-e915-43a8-a59e-5aba80a424c1-metrics-tls\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.924843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0714a1-e162-4459-953a-bf44f6433301-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.924947 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925164 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925231 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-serving-cert\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925255 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit-dir\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925322 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-audit-dir\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925640 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnjw7\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-kube-api-access-qnjw7\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925660 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925706 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-serving-cert\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.925925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqmd2\" (UniqueName: \"kubernetes.io/projected/75f13827-e915-43a8-a59e-5aba80a424c1-kube-api-access-wqmd2\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.926162 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgz65\" (UniqueName: \"kubernetes.io/projected/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-kube-api-access-cgz65\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.926259 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb474c-dfea-4d7f-802a-efa7e15d8595-trusted-ca\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.926653 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0714a1-e162-4459-953a-bf44f6433301-config\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.926729 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef700d08-f99c-4682-9099-ac6b8263b400-config\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.926808 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-csi-data-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: 
\"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.927502 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bbb474c-dfea-4d7f-802a-efa7e15d8595-trusted-ca\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.927528 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-trqhd"] Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.927832 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf0714a1-e162-4459-953a-bf44f6433301-config\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.928032 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-metrics-tls\") pod \"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.929806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-serving-cert\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.930004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c32f1f57-3c18-4efc-be09-e9abfef22c52-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.930823 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.933903 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d76cd6b-eda9-4487-8665-8e99b372fa38-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.935673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/009edd4d-dcfb-4a88-a93e-dbd2430403c1-serving-cert\") pod \"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.932547 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.936323 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.938409 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8fd7fc79-44f5-4c00-8897-962ce4018e34-stats-auth\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.938530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50e81137-0d77-4028-9a46-600476de40b0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.938758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-serving-cert\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.943299 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4df77495-3e2c-4c13-823f-f217f0dcb8f5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.944628 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-encryption-config\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.951960 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-etcd-client\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.952104 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.960009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a485e21-6ee8-4849-92ab-9ec0e8b0aa35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-q7nnk\" (UID: \"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.963364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkd2\" (UniqueName: \"kubernetes.io/projected/d8ba9f40-589c-4f71-9eff-6fee943bea65-kube-api-access-qpkd2\") pod \"machine-config-controller-84d6567774-pcmq9\" (UID: \"d8ba9f40-589c-4f71-9eff-6fee943bea65\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.972907 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d24dd0_5bdb_4ae7_971f_8cb91aad45f5.slice/crio-f3b59590c806ab27b005de47b2d3d18f56624fba3ad710a49623190bd522e5cf WatchSource:0}: Error finding container f3b59590c806ab27b005de47b2d3d18f56624fba3ad710a49623190bd522e5cf: Status 404 returned error can't find the container with id f3b59590c806ab27b005de47b2d3d18f56624fba3ad710a49623190bd522e5cf Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.973120 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlc8\" (UniqueName: \"kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8\") pod \"console-f9d7485db-mw286\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:19 crc kubenswrapper[4780]: W1205 06:48:19.980503 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608368ed_ece7_45a1_b13d_50ede7867c1a.slice/crio-39d1034c45b7136ac1bb68d79f3495133310b0d449d76a9172b820773658c8b4 WatchSource:0}: Error finding container 39d1034c45b7136ac1bb68d79f3495133310b0d449d76a9172b820773658c8b4: Status 404 returned error can't find the container with id 39d1034c45b7136ac1bb68d79f3495133310b0d449d76a9172b820773658c8b4 Dec 05 06:48:19 crc kubenswrapper[4780]: I1205 06:48:19.995479 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btd2g\" (UniqueName: \"kubernetes.io/projected/8fd7fc79-44f5-4c00-8897-962ce4018e34-kube-api-access-btd2g\") pod \"router-default-5444994796-krv7k\" (UID: \"8fd7fc79-44f5-4c00-8897-962ce4018e34\") " pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.010818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.027733 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.027934 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.527906671 +0000 UTC m=+134.597423003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.027999 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-cabundle\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-profile-collector-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrwj\" (UniqueName: \"kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028107 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef700d08-f99c-4682-9099-ac6b8263b400-serving-cert\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bftgg\" (UniqueName: \"kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028186 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzwd\" (UniqueName: \"kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd\") pod 
\"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028204 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-mountpoint-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028232 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npbxb\" (UniqueName: \"kubernetes.io/projected/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-kube-api-access-npbxb\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028284 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-socket-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028299 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmxp\" (UniqueName: \"kubernetes.io/projected/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-kube-api-access-nkmxp\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-registration-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028341 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnlg\" (UniqueName: \"kubernetes.io/projected/ef700d08-f99c-4682-9099-ac6b8263b400-kube-api-access-rlnlg\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028365 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75f13827-e915-43a8-a59e-5aba80a424c1-metrics-tls\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028401 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqmd2\" (UniqueName: \"kubernetes.io/projected/75f13827-e915-43a8-a59e-5aba80a424c1-kube-api-access-wqmd2\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgz65\" (UniqueName: \"kubernetes.io/projected/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-kube-api-access-cgz65\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028474 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef700d08-f99c-4682-9099-ac6b8263b400-config\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028490 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-csi-data-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028512 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-apiservice-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028526 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-webhook-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028540 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhtz6\" (UniqueName: \"kubernetes.io/projected/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-kube-api-access-qhtz6\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh699\" (UniqueName: \"kubernetes.io/projected/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-kube-api-access-gh699\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028624 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-plugins-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028653 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f13827-e915-43a8-a59e-5aba80a424c1-config-volume\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028697 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028734 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87cmc\" (UID: 
\"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028768 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5p9\" (UniqueName: \"kubernetes.io/projected/7be48601-40b4-49d8-a6ed-3a0a9de0b668-kube-api-access-vp5p9\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028808 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-tmpfs\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-cabundle\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-certs\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028861 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028898 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9dg\" (UniqueName: \"kubernetes.io/projected/abbfbd0d-14f1-496a-b029-1c2f66929e11-kube-api-access-8m9dg\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-srv-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l299\" (UniqueName: \"kubernetes.io/projected/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-kube-api-access-2l299\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028959 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-srv-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.028975 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-images\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029029 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029046 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/065d59a1-845a-4a35-8f55-6e550e259a33-proxy-tls\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029062 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029079 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-node-bootstrap-token\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029097 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029114 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029130 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-cert\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " 
pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029154 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wtd5\" (UniqueName: \"kubernetes.io/projected/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-kube-api-access-5wtd5\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029173 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-key\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gssmf\" (UniqueName: \"kubernetes.io/projected/065d59a1-845a-4a35-8f55-6e550e259a33-kube-api-access-gssmf\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029352 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-mountpoint-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.029625 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.529611469 +0000 UTC m=+134.599127801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.029736 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-images\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.030255 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-registration-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.030270 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.030320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-socket-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.030386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql4kl\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-kube-api-access-ql4kl\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.030583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.031549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.032278 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef700d08-f99c-4682-9099-ac6b8263b400-config\") pod 
\"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.032363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-csi-data-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033184 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-certs\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-profile-collector-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033197 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7be48601-40b4-49d8-a6ed-3a0a9de0b668-plugins-dir\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033572 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef700d08-f99c-4682-9099-ac6b8263b400-serving-cert\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033618 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/065d59a1-845a-4a35-8f55-6e550e259a33-proxy-tls\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.033935 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.034169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f13827-e915-43a8-a59e-5aba80a424c1-config-volume\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.034203 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.034231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065d59a1-845a-4a35-8f55-6e550e259a33-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.034637 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-tmpfs\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.035232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-signing-key\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.035530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-cert\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.035573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.035720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-webhook-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.036258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/abbfbd0d-14f1-496a-b029-1c2f66929e11-node-bootstrap-token\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.036444 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.037213 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-srv-cert\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.037463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75f13827-e915-43a8-a59e-5aba80a424c1-metrics-tls\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.038119 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-srv-cert\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.038491 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.038720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.040332 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-apiservice-cert\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.040907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.045193 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.049931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckcgs\" (UniqueName: \"kubernetes.io/projected/009edd4d-dcfb-4a88-a93e-dbd2430403c1-kube-api-access-ckcgs\") pod 
\"authentication-operator-69f744f599-x2jff\" (UID: \"009edd4d-dcfb-4a88-a93e-dbd2430403c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.089000 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drjz\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.108001 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50e81137-0d77-4028-9a46-600476de40b0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9lsfp\" (UID: \"50e81137-0d77-4028-9a46-600476de40b0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.128867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhqrm\" (UniqueName: \"kubernetes.io/projected/e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e-kube-api-access-hhqrm\") pod \"dns-operator-744455d44c-27sld\" (UID: \"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.130272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.130398 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.630380541 +0000 UTC m=+134.699896873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.130498 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.130795 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.630787402 +0000 UTC m=+134.700303734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.131817 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.138818 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.149304 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrd22\" (UniqueName: \"kubernetes.io/projected/6d76cd6b-eda9-4487-8665-8e99b372fa38-kube-api-access-hrd22\") pod \"multus-admission-controller-857f4d67dd-94sr5\" (UID: \"6d76cd6b-eda9-4487-8665-8e99b372fa38\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.170619 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p97z\" (UniqueName: \"kubernetes.io/projected/20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76-kube-api-access-7p97z\") pod \"openshift-apiserver-operator-796bbdcf4f-pgwq6\" (UID: \"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.175557 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.190200 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k49z\" (UniqueName: \"kubernetes.io/projected/b802ab76-8dbe-4ddd-8704-be862bfb7598-kube-api-access-8k49z\") pod \"migrator-59844c95c7-kdc8g\" (UID: \"b802ab76-8dbe-4ddd-8704-be862bfb7598\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.190339 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.197468 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.204297 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.210925 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75s9h\" (UniqueName: \"kubernetes.io/projected/471abb3f-f9ef-454a-8f00-87c4846e59f2-kube-api-access-75s9h\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7mwk\" (UID: \"471abb3f-f9ef-454a-8f00-87c4846e59f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.212646 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.221603 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.230447 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxvz\" (UniqueName: \"kubernetes.io/projected/4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f-kube-api-access-lxxvz\") pod \"etcd-operator-b45778765-q4mtd\" (UID: \"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.232337 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.232535 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.73251009 +0000 UTC m=+134.802026412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.232767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.233135 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.733107756 +0000 UTC m=+134.802624088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.238377 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.241778 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2q5\" (UniqueName: \"kubernetes.io/projected/b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5-kube-api-access-kn2q5\") pod \"apiserver-76f77b778f-5xj2l\" (UID: \"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5\") " pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.254449 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.292712 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9r6\" (UniqueName: \"kubernetes.io/projected/c32f1f57-3c18-4efc-be09-e9abfef22c52-kube-api-access-pf9r6\") pod \"openshift-controller-manager-operator-756b6f6bc6-4qnzs\" (UID: \"c32f1f57-3c18-4efc-be09-e9abfef22c52\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.314362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czm22\" (UniqueName: \"kubernetes.io/projected/b4bb7d7c-826d-4157-970e-c4d195647287-kube-api-access-czm22\") pod \"cluster-samples-operator-665b6dd947-ljf89\" (UID: \"b4bb7d7c-826d-4157-970e-c4d195647287\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.340240 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.340495 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.840471711 +0000 UTC m=+134.909988043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.340660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.341021 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.841007026 +0000 UTC m=+134.910523358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.346468 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5bbb474c-dfea-4d7f-802a-efa7e15d8595-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qdkv5\" (UID: \"5bbb474c-dfea-4d7f-802a-efa7e15d8595\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.354013 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.363943 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.370892 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0714a1-e162-4459-953a-bf44f6433301-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xmmsj\" (UID: \"cf0714a1-e162-4459-953a-bf44f6433301\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.375473 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnjw7\" (UniqueName: \"kubernetes.io/projected/4df77495-3e2c-4c13-823f-f217f0dcb8f5-kube-api-access-qnjw7\") pod \"cluster-image-registry-operator-dc59b4c8b-gfpww\" (UID: \"4df77495-3e2c-4c13-823f-f217f0dcb8f5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.399023 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrwj\" (UniqueName: \"kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj\") pod \"route-controller-manager-6576b87f9c-vxxmv\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: W1205 06:48:20.412775 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fd7fc79_44f5_4c00_8897_962ce4018e34.slice/crio-d1628ccf3ce4af4d825a81a7e50e46fc10f9914d4981b3c18f26ab9f0283815d WatchSource:0}: Error finding container d1628ccf3ce4af4d825a81a7e50e46fc10f9914d4981b3c18f26ab9f0283815d: Status 404 returned error can't find the container with id d1628ccf3ce4af4d825a81a7e50e46fc10f9914d4981b3c18f26ab9f0283815d Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.416622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npbxb\" (UniqueName: \"kubernetes.io/projected/fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f-kube-api-access-npbxb\") pod \"catalog-operator-68c6474976-ffg6g\" (UID: \"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.418726 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.425710 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.433473 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzwd\" (UniqueName: \"kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd\") pod \"collect-profiles-29415285-24x6s\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.441757 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.441854 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.941837919 +0000 UTC m=+135.011354251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.442111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.442414 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:20.942406085 +0000 UTC m=+135.011922417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.445294 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.458593 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bftgg\" (UniqueName: \"kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg\") pod \"marketplace-operator-79b997595-87cmc\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.464036 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.468653 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.472490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gssmf\" (UniqueName: \"kubernetes.io/projected/065d59a1-845a-4a35-8f55-6e550e259a33-kube-api-access-gssmf\") pod \"machine-config-operator-74547568cd-rh794\" (UID: \"065d59a1-845a-4a35-8f55-6e550e259a33\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.483854 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.512698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmxp\" (UniqueName: \"kubernetes.io/projected/72b5be1d-5e75-4797-9891-8b3de8cc6a7f-kube-api-access-nkmxp\") pod \"olm-operator-6b444d44fb-dhv68\" (UID: \"72b5be1d-5e75-4797-9891-8b3de8cc6a7f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.518573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnlg\" (UniqueName: \"kubernetes.io/projected/ef700d08-f99c-4682-9099-ac6b8263b400-kube-api-access-rlnlg\") pod \"service-ca-operator-777779d784-ggrnz\" (UID: \"ef700d08-f99c-4682-9099-ac6b8263b400\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.528478 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.536159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wtd5\" (UniqueName: \"kubernetes.io/projected/ef3fbf26-207a-4c4c-8296-a48fcbfa9641-kube-api-access-5wtd5\") pod \"packageserver-d55dfcdfc-r2xld\" (UID: \"ef3fbf26-207a-4c4c-8296-a48fcbfa9641\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.543470 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.544048 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.544160 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.044141973 +0000 UTC m=+135.113658305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.552902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.565079 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l299\" (UniqueName: \"kubernetes.io/projected/216c07e5-bfea-47f3-9b08-b8f7e6c8177d-kube-api-access-2l299\") pod \"ingress-canary-74bc7\" (UID: \"216c07e5-bfea-47f3-9b08-b8f7e6c8177d\") " pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.570498 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.573387 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9dg\" (UniqueName: \"kubernetes.io/projected/abbfbd0d-14f1-496a-b029-1c2f66929e11-kube-api-access-8m9dg\") pod \"machine-config-server-thgz6\" (UID: \"abbfbd0d-14f1-496a-b029-1c2f66929e11\") " pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.579109 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.588247 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.596142 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.602111 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqmd2\" (UniqueName: \"kubernetes.io/projected/75f13827-e915-43a8-a59e-5aba80a424c1-kube-api-access-wqmd2\") pod \"dns-default-wgk69\" (UID: \"75f13827-e915-43a8-a59e-5aba80a424c1\") " pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.610022 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.616130 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgz65\" (UniqueName: \"kubernetes.io/projected/79984b4c-0cd9-465d-9bc4-fcf4d9a57196-kube-api-access-cgz65\") pod \"package-server-manager-789f6589d5-dllgr\" (UID: \"79984b4c-0cd9-465d-9bc4-fcf4d9a57196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.617045 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.637013 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhtz6\" (UniqueName: \"kubernetes.io/projected/9f0f238a-6517-4eae-b2b6-26bb4d01bb4c-kube-api-access-qhtz6\") pod \"service-ca-9c57cc56f-zkrh4\" (UID: \"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c\") " pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.637275 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-74bc7" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.646165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.646468 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.146456348 +0000 UTC m=+135.215972680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.671203 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.680087 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-thgz6" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.680425 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh699\" (UniqueName: \"kubernetes.io/projected/6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde-kube-api-access-gh699\") pod \"kube-storage-version-migrator-operator-b67b599dd-z2lhl\" (UID: \"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.690281 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5p9\" (UniqueName: \"kubernetes.io/projected/7be48601-40b4-49d8-a6ed-3a0a9de0b668-kube-api-access-vp5p9\") pod \"csi-hostpathplugin-9rr45\" (UID: \"7be48601-40b4-49d8-a6ed-3a0a9de0b668\") " pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.747550 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.749788 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.24976393 +0000 UTC m=+135.319280272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.771986 4780 generic.go:334] "Generic (PLEG): container finished" podID="608368ed-ece7-45a1-b13d-50ede7867c1a" containerID="574a2d4d0e5ad6088bc6cbe3e48767ed39e662db1fe8462650a0e7edc0f8a9a4" exitCode=0 Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.772273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" event={"ID":"608368ed-ece7-45a1-b13d-50ede7867c1a","Type":"ContainerDied","Data":"574a2d4d0e5ad6088bc6cbe3e48767ed39e662db1fe8462650a0e7edc0f8a9a4"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.772321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" event={"ID":"608368ed-ece7-45a1-b13d-50ede7867c1a","Type":"ContainerStarted","Data":"39d1034c45b7136ac1bb68d79f3495133310b0d449d76a9172b820773658c8b4"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.774547 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qccq" event={"ID":"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4","Type":"ContainerStarted","Data":"7830b620d70e7044f59a4f84abba59a9b4c44180c57bf6252b3c4a5ef0306dd8"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.774576 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6qccq" event={"ID":"14a6c1d4-82cc-4cac-b7c3-4a875e8399b4","Type":"ContainerStarted","Data":"a8ff6a3b39eea7d362b2dbf6c91f94be9254e6e23de393254717ed8a3bfb85d8"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.774736 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.776687 4780 generic.go:334] "Generic (PLEG): container finished" podID="a83a70c0-d58c-498b-bce0-b8823ff40526" containerID="6b2fc082e7f7d1263d88e7d6573253d52db343f0618f162b10abf582199384eb" exitCode=0 Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.776838 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" event={"ID":"a83a70c0-d58c-498b-bce0-b8823ff40526","Type":"ContainerDied","Data":"6b2fc082e7f7d1263d88e7d6573253d52db343f0618f162b10abf582199384eb"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.776868 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" event={"ID":"a83a70c0-d58c-498b-bce0-b8823ff40526","Type":"ContainerStarted","Data":"cf2bdf47f9cd776480d259a5a69c972410f1980f7502f629a152a48664c7cf99"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.780435 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-krv7k" event={"ID":"8fd7fc79-44f5-4c00-8897-962ce4018e34","Type":"ContainerStarted","Data":"dd4ff060ea1d3b401127b53e4c3399d42f32f90e55e7e1920f6c4ce48fd78ede"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 
06:48:20.780468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-krv7k" event={"ID":"8fd7fc79-44f5-4c00-8897-962ce4018e34","Type":"ContainerStarted","Data":"d1628ccf3ce4af4d825a81a7e50e46fc10f9914d4981b3c18f26ab9f0283815d"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.803484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" event={"ID":"881d3d6f-e692-4c33-b3fd-8bdba759d80d","Type":"ContainerStarted","Data":"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.803527 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" event={"ID":"881d3d6f-e692-4c33-b3fd-8bdba759d80d","Type":"ContainerStarted","Data":"69bb155eb4e4779ca918273a9eb261095f0c573d7513fe55a39700ff2a25d6c8"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.804388 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.812278 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" event={"ID":"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5","Type":"ContainerStarted","Data":"f3b59590c806ab27b005de47b2d3d18f56624fba3ad710a49623190bd522e5cf"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.813340 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.824317 4780 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fmn88 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.824361 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.843896 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.858273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sdctn" event={"ID":"d32f2abc-ca84-43ff-bee2-65a7cecff5d2","Type":"ContainerStarted","Data":"82419672fbc246a1bc529df613829c291d27b3335d4c1686dec32451f17b3387"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.859545 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.860610 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.861116 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.361101656 +0000 UTC m=+135.430617998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.863844 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.865817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" event={"ID":"0299e11c-ff9b-4b45-826b-5289efbfbef8","Type":"ContainerStarted","Data":"fb13b8014228746d7b3539f962ea0bfb4a9fc5d43329f014e000425f6e6fe549"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.865977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" event={"ID":"0299e11c-ff9b-4b45-826b-5289efbfbef8","Type":"ContainerStarted","Data":"3b0957e9dbc7952e1e1cb6b77d35b07c7ae6f98cc31514ea4b1ec9ebdcdc6096"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.866044 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" event={"ID":"0299e11c-ff9b-4b45-826b-5289efbfbef8","Type":"ContainerStarted","Data":"9445a33236f764468e27b596ecf221d2fc9137c6c6b21e96dd868079c13e2e42"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.871703 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-sdctn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.871760 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sdctn" podUID="d32f2abc-ca84-43ff-bee2-65a7cecff5d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.878587 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" event={"ID":"0fda723a-14f8-4fda-950f-fb95725c78ad","Type":"ContainerStarted","Data":"8e92e3bbb7186b9507559006594fb78877cd6be0daccecac8ac5a1fb179b12ed"} Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.925978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.931141 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.963315 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.963969 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.964197 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.464178461 +0000 UTC m=+135.533694783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:20 crc kubenswrapper[4780]: I1205 06:48:20.964483 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:20 crc kubenswrapper[4780]: E1205 06:48:20.965776 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.465767355 +0000 UTC m=+135.535283687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.068731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.069991 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:21.569973312 +0000 UTC m=+135.639489644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.072514 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" podStartSLOduration=116.072500732 podStartE2EDuration="1m56.072500732s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:21.071178976 +0000 UTC m=+135.140695318" watchObservedRunningTime="2025-12-05 06:48:21.072500732 +0000 UTC m=+135.142017064" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.170172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.170221 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" podStartSLOduration=116.170205398 podStartE2EDuration="1m56.170205398s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:21.168007538 +0000 UTC m=+135.237523880" watchObservedRunningTime="2025-12-05 06:48:21.170205398 +0000 UTC m=+135.239721730" Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.170612 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.67059687 +0000 UTC m=+135.740113202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.197690 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.273436 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.273770 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.773753547 +0000 UTC m=+135.843269889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.375028 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.375341 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.875327482 +0000 UTC m=+135.944843814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.477431 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.477814 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:21.97779963 +0000 UTC m=+136.047315962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.578546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.578932 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.078919391 +0000 UTC m=+136.148435723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.679529 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.679926 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.179911539 +0000 UTC m=+136.249427871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.777236 4780 patch_prober.go:28] interesting pod/console-operator-58897d9998-6qccq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": context deadline exceeded" start-of-body= Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.777299 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6qccq" podUID="14a6c1d4-82cc-4cac-b7c3-4a875e8399b4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": context deadline exceeded" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.782038 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.782431 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.282417039 +0000 UTC m=+136.351933361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.803728 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-krv7k" podStartSLOduration=116.803701391 podStartE2EDuration="1m56.803701391s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:21.801462158 +0000 UTC m=+135.870978490" watchObservedRunningTime="2025-12-05 06:48:21.803701391 +0000 UTC m=+135.873217723" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.882026 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" event={"ID":"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5","Type":"ContainerStarted","Data":"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5"} Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.884714 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.885040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-thgz6" event={"ID":"abbfbd0d-14f1-496a-b029-1c2f66929e11","Type":"ContainerStarted","Data":"23d9c1137ed809775a72f04587dbdb198230ebdfaa290d2e994cef3950917518"} Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.885084 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-thgz6" event={"ID":"abbfbd0d-14f1-496a-b029-1c2f66929e11","Type":"ContainerStarted","Data":"00c41564c7945c03ca40bbcef529fa902aa358e41a8e681e049c0c41419119f2"} Dec 05 06:48:21 crc kubenswrapper[4780]: E1205 06:48:21.885106 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.385088613 +0000 UTC m=+136.454604945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.886750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" event={"ID":"608368ed-ece7-45a1-b13d-50ede7867c1a","Type":"ContainerStarted","Data":"2b41bdb70df5c00997a1871fd4d563eefd9ffabe6318484c97ef583a96c56962"} Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.886934 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.889852 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" event={"ID":"a83a70c0-d58c-498b-bce0-b8823ff40526","Type":"ContainerStarted","Data":"1045d4fcf969215067c4c489121a42285a984c4f811d240aa29f773f50498e33"} Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.891159 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-sdctn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.891208 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sdctn" podUID="d32f2abc-ca84-43ff-bee2-65a7cecff5d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.926382 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:21 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:21 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:21 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.926439 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.933906 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6qccq" Dec 05 06:48:21 crc kubenswrapper[4780]: I1205 06:48:21.986922 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:21 crc 
kubenswrapper[4780]: E1205 06:48:21.989724 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.489708742 +0000 UTC m=+136.559225074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.088260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.088574 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.58856078 +0000 UTC m=+136.658077102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.192739 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.193818 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.693805616 +0000 UTC m=+136.763321948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.209217 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:22 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:22 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:22 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.209275 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.285326 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6qccq" podStartSLOduration=117.28531059 podStartE2EDuration="1m57.28531059s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.247283443 +0000 UTC m=+136.316799775" watchObservedRunningTime="2025-12-05 06:48:22.28531059 +0000 UTC m=+136.354826922" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.300413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.300769 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.80075444 +0000 UTC m=+136.870270772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.327156 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mw286"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.366793 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-thgz6" podStartSLOduration=5.366773015 podStartE2EDuration="5.366773015s" podCreationTimestamp="2025-12-05 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.362088705 +0000 UTC m=+136.431605037" watchObservedRunningTime="2025-12-05 06:48:22.366773015 +0000 UTC m=+136.436289347" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.366930 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.376889 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:48:22 crc kubenswrapper[4780]: W1205 06:48:22.386101 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod471abb3f_f9ef_454a_8f00_87c4846e59f2.slice/crio-05a913df92ca22d38af8aa9ad935b91ba0b763472bca56efde0fc2468e20d6f2 WatchSource:0}: Error finding container 05a913df92ca22d38af8aa9ad935b91ba0b763472bca56efde0fc2468e20d6f2: Status 404 returned error can't find the container with id 05a913df92ca22d38af8aa9ad935b91ba0b763472bca56efde0fc2468e20d6f2 Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.402922 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.403793 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:22.903776083 +0000 UTC m=+136.973292415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.504375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.504768 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.00475324 +0000 UTC m=+137.074269572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.510293 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" podStartSLOduration=117.510276714 podStartE2EDuration="1m57.510276714s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.474796678 +0000 UTC m=+136.544313020" watchObservedRunningTime="2025-12-05 06:48:22.510276714 +0000 UTC m=+136.579793046" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.553543 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-tjqz5" podStartSLOduration=117.553524446 podStartE2EDuration="1m57.553524446s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.513488224 +0000 UTC m=+136.583004556" watchObservedRunningTime="2025-12-05 06:48:22.553524446 +0000 UTC m=+136.623040778" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.556397 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sdctn" podStartSLOduration=117.556387556 podStartE2EDuration="1m57.556387556s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.556271593 +0000 UTC m=+136.625787945" watchObservedRunningTime="2025-12-05 06:48:22.556387556 +0000 UTC m=+136.625903888" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.602388 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" podStartSLOduration=117.602346024 podStartE2EDuration="1m57.602346024s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.59539584 +0000 UTC m=+136.664912182" watchObservedRunningTime="2025-12-05 06:48:22.602346024 +0000 UTC m=+136.671862366" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.605507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.605904 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.105872122 +0000 UTC m=+137.175388454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.673693 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrx8h" podStartSLOduration=118.673675607 podStartE2EDuration="1m58.673675607s" podCreationTimestamp="2025-12-05 06:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:22.624636313 +0000 UTC m=+136.694152645" watchObservedRunningTime="2025-12-05 06:48:22.673675607 +0000 UTC m=+136.743191939" Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.675669 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.687311 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-q4mtd"] Dec 05 06:48:22 crc kubenswrapper[4780]: W1205 06:48:22.697801 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb802ab76_8dbe_4ddd_8704_be862bfb7598.slice/crio-d72793f4cdb29c5faff0a4289b0e11ffab9c86b83845dde25f54aaba079efbca WatchSource:0}: Error finding container d72793f4cdb29c5faff0a4289b0e11ffab9c86b83845dde25f54aaba079efbca: Status 404 returned error can't find the container with id d72793f4cdb29c5faff0a4289b0e11ffab9c86b83845dde25f54aaba079efbca Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.700708 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp"] Dec 05 06:48:22 crc 
kubenswrapper[4780]: I1205 06:48:22.709502 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.709800 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.209783341 +0000 UTC m=+137.279299673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.716146 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x2jff"] Dec 05 06:48:22 crc kubenswrapper[4780]: W1205 06:48:22.722644 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d76cd6b_eda9_4487_8665_8e99b372fa38.slice/crio-364b133365917bd0d7aac1b891d907fd9d38e087aa4949ffe3050b3729cd1a66 WatchSource:0}: Error finding container 364b133365917bd0d7aac1b891d907fd9d38e087aa4949ffe3050b3729cd1a66: Status 404 returned error can't find the container with id 364b133365917bd0d7aac1b891d907fd9d38e087aa4949ffe3050b3729cd1a66 Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.738913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-94sr5"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.759157 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27sld"] Dec 05 06:48:22 crc kubenswrapper[4780]: W1205 06:48:22.760251 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb663fa26_bc85_4c6c_a8ef_c9e17cf7c2c5.slice/crio-eb726e43377eb946ab688c5d81c56b0ac3a69afe9fd25470d3e585c343f188df WatchSource:0}: Error finding container eb726e43377eb946ab688c5d81c56b0ac3a69afe9fd25470d3e585c343f188df: Status 404 returned error can't find the container with id eb726e43377eb946ab688c5d81c56b0ac3a69afe9fd25470d3e585c343f188df Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.761431 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.766135 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5xj2l"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.810620 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.810982 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.310969714 +0000 UTC m=+137.380486036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.866259 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.913411 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:22 crc kubenswrapper[4780]: E1205 06:48:22.914373 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.414340138 +0000 UTC m=+137.483856470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:22 crc kubenswrapper[4780]: W1205 06:48:22.931376 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a485e21_6ee8_4849_92ab_9ec0e8b0aa35.slice/crio-6b0a1b9ade0a3e16b892975b6462f6da9639d7e578b5ea688d84412ed4563826 WatchSource:0}: Error finding container 6b0a1b9ade0a3e16b892975b6462f6da9639d7e578b5ea688d84412ed4563826: Status 404 returned error can't find the container with id 6b0a1b9ade0a3e16b892975b6462f6da9639d7e578b5ea688d84412ed4563826 Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.952807 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.984687 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs"] Dec 05 06:48:22 crc kubenswrapper[4780]: I1205 06:48:22.989708 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rh794"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.001121 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.017016 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.017632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.017996 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.517983949 +0000 UTC m=+137.587500281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.024246 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.030776 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.033461 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.042741 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.049786 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.055696 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld"] Dec 05 06:48:23 crc kubenswrapper[4780]: W1205 06:48:23.064817 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bbb474c_dfea_4d7f_802a_efa7e15d8595.slice/crio-1e7fe35cc784483f7df1f822814afb6323df00a177c3f91a6115794e1b6f03b8 WatchSource:0}: Error finding container 1e7fe35cc784483f7df1f822814afb6323df00a177c3f91a6115794e1b6f03b8: Status 404 returned error can't find the container with id 1e7fe35cc784483f7df1f822814afb6323df00a177c3f91a6115794e1b6f03b8 Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.065403 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" event={"ID":"6d76cd6b-eda9-4487-8665-8e99b372fa38","Type":"ContainerStarted","Data":"364b133365917bd0d7aac1b891d907fd9d38e087aa4949ffe3050b3729cd1a66"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.070195 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.071619 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" event={"ID":"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f","Type":"ContainerStarted","Data":"8af0976264b9431cf3b876f35b6267f6165575e5050aee574bf5dc89e638b30c"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.085174 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.086444 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" 
event={"ID":"b802ab76-8dbe-4ddd-8704-be862bfb7598","Type":"ContainerStarted","Data":"d72793f4cdb29c5faff0a4289b0e11ffab9c86b83845dde25f54aaba079efbca"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.088250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" event={"ID":"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5","Type":"ContainerStarted","Data":"eb726e43377eb946ab688c5d81c56b0ac3a69afe9fd25470d3e585c343f188df"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.093666 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" event={"ID":"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e","Type":"ContainerStarted","Data":"5fffe003446c9aa7366301859f2e6edfa38f73b56d90312e377235848d4f18c2"} Dec 05 06:48:23 crc kubenswrapper[4780]: W1205 06:48:23.107572 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b5ca6f7_6820_4010_966e_05e4cf49ba03.slice/crio-589ab147a7ff2f73690502ed232c60292eb8adbad79b9ab1ce30dfc1bd424236 WatchSource:0}: Error finding container 589ab147a7ff2f73690502ed232c60292eb8adbad79b9ab1ce30dfc1bd424236: Status 404 returned error can't find the container with id 589ab147a7ff2f73690502ed232c60292eb8adbad79b9ab1ce30dfc1bd424236 Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.109460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" event={"ID":"009edd4d-dcfb-4a88-a93e-dbd2430403c1","Type":"ContainerStarted","Data":"87bd4d7f795c82e0a09e17ee99ea335ec6ef1adf83478a5605630df079fe859b"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.126099 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.126782 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.626768464 +0000 UTC m=+137.696284786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.161483 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" podStartSLOduration=118.161469738 podStartE2EDuration="1m58.161469738s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:23.139249081 +0000 UTC m=+137.208765413" watchObservedRunningTime="2025-12-05 06:48:23.161469738 +0000 UTC m=+137.230986070" Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.163411 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.206753 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:23 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:23 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:23 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.206802 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.208833 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zkrh4"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.209801 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.212433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" event={"ID":"d8ba9f40-589c-4f71-9eff-6fee943bea65","Type":"ContainerStarted","Data":"96077ce3f6edd78797a8f1d4a737b166c5297d41557789c42dc2311d43e5dbea"} Dec 05 06:48:23 crc kubenswrapper[4780]: W1205 06:48:23.214213 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72b5be1d_5e75_4797_9891_8b3de8cc6a7f.slice/crio-0292079b6f49f040d205a2e761852cfe6ef3c04e96f700578e88d06ddaff71bc WatchSource:0}: Error finding container 0292079b6f49f040d205a2e761852cfe6ef3c04e96f700578e88d06ddaff71bc: Status 404 returned error can't find the container with id 0292079b6f49f040d205a2e761852cfe6ef3c04e96f700578e88d06ddaff71bc Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.228626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" event={"ID":"50e81137-0d77-4028-9a46-600476de40b0","Type":"ContainerStarted","Data":"8226f12f63aec370050f8e6d92a30cf7506d998f472c560a60c4068996aecb05"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.230683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.231010 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.730995891 +0000 UTC m=+137.800512223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.233923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mw286" event={"ID":"7861d984-72f7-44e0-8d42-fb04a7d2000e","Type":"ContainerStarted","Data":"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.233979 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mw286" event={"ID":"7861d984-72f7-44e0-8d42-fb04a7d2000e","Type":"ContainerStarted","Data":"48b6ae70cbeb22d56011bc4d60adb6e04f81000c29e5d59df32493d0465e0dde"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.239573 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" event={"ID":"471abb3f-f9ef-454a-8f00-87c4846e59f2","Type":"ContainerStarted","Data":"4f18c93b29c9d729fcb4afc637aaa59efd17f9b8de978d8a65ed770d4a53b144"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.239598 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" event={"ID":"471abb3f-f9ef-454a-8f00-87c4846e59f2","Type":"ContainerStarted","Data":"05a913df92ca22d38af8aa9ad935b91ba0b763472bca56efde0fc2468e20d6f2"} Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.240005 4780 patch_prober.go:28] interesting pod/downloads-7954f5f757-sdctn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.240039 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sdctn" podUID="d32f2abc-ca84-43ff-bee2-65a7cecff5d2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 05 
06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.244094 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wgk69"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.256094 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.256347 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mw286" podStartSLOduration=118.256333965 podStartE2EDuration="1m58.256333965s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:23.250858383 +0000 UTC m=+137.320374715" watchObservedRunningTime="2025-12-05 06:48:23.256333965 +0000 UTC m=+137.325850297" Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.272572 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7mwk" podStartSLOduration=118.272551387 podStartE2EDuration="1m58.272551387s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:23.267244819 +0000 UTC m=+137.336761151" watchObservedRunningTime="2025-12-05 06:48:23.272551387 +0000 UTC m=+137.342067719" Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.328691 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-74bc7"] Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.331248 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.332185 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.832166743 +0000 UTC m=+137.901683075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.370597 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9rr45"] Dec 05 06:48:23 crc kubenswrapper[4780]: W1205 06:48:23.380235 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df77495_3e2c_4c13_823f_f217f0dcb8f5.slice/crio-a68c18b87c7340151ac383d32a3b835c80824842163ab287ec43ca0128a4b386 WatchSource:0}: Error finding container a68c18b87c7340151ac383d32a3b835c80824842163ab287ec43ca0128a4b386: Status 404 returned error can't find the container with id a68c18b87c7340151ac383d32a3b835c80824842163ab287ec43ca0128a4b386 Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.435234 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.435584 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:23.935554008 +0000 UTC m=+138.005070340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.535814 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.535989 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.03596519 +0000 UTC m=+138.105481522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.536438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.536737 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.03672688 +0000 UTC m=+138.106243212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.637362 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.637517 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.137492302 +0000 UTC m=+138.207008634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.637673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.638100 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.138093549 +0000 UTC m=+138.207609881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.740387 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.740570 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.240540247 +0000 UTC m=+138.310056579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.740710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.741106 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.241090842 +0000 UTC m=+138.310607174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.844486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.844676 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.344645221 +0000 UTC m=+138.414161553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.844732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.845065 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.345058593 +0000 UTC m=+138.414574925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:23 crc kubenswrapper[4780]: I1205 06:48:23.945333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:23 crc kubenswrapper[4780]: E1205 06:48:23.945840 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.445826965 +0000 UTC m=+138.515343297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.052026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.052438 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.552424678 +0000 UTC m=+138.621941010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.153998 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.154225 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.654193167 +0000 UTC m=+138.723709499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.154821 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.155138 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.655126593 +0000 UTC m=+138.724642925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.197124 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:24 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:24 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:24 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.197183 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.251005 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" event={"ID":"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c","Type":"ContainerStarted","Data":"6d68b06a2c179ebd909715cad2ae3645b4d4cda28a3f3d4a0a3d65bec5d4a2b5"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.255441 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.255813 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:24.755793802 +0000 UTC m=+138.825310134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.257638 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" event={"ID":"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde","Type":"ContainerStarted","Data":"cd9336992fef12ce46d83db217349be969066fda83bbb3668c056a301ee6361d"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.261799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" event={"ID":"50e81137-0d77-4028-9a46-600476de40b0","Type":"ContainerStarted","Data":"b9e491ae8ed609e1ef233b0ec1fdf0fd4de26858072fa0a3d0ca5fc96253d1cc"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.272275 4780 generic.go:334] "Generic (PLEG): container finished" podID="b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5" containerID="34e081035332a1884a2f7eca5ed2f9c0c14e6f5ca4fe1ffc13e6a17eaee48b05" exitCode=0 Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.272374 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" event={"ID":"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5","Type":"ContainerDied","Data":"34e081035332a1884a2f7eca5ed2f9c0c14e6f5ca4fe1ffc13e6a17eaee48b05"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.278920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" event={"ID":"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e","Type":"ContainerStarted","Data":"2809dc2782cd1f6163845726979d8f99e8751f3136cb53eb265c12d9b1571e15"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.279841 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" event={"ID":"72b5be1d-5e75-4797-9891-8b3de8cc6a7f","Type":"ContainerStarted","Data":"b3a3ac5f9c0d70320f37982d5ba020bc09952e6672619d21aa85679604bd9c6c"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.279863 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" event={"ID":"72b5be1d-5e75-4797-9891-8b3de8cc6a7f","Type":"ContainerStarted","Data":"0292079b6f49f040d205a2e761852cfe6ef3c04e96f700578e88d06ddaff71bc"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.280678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.288827 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" event={"ID":"cf0714a1-e162-4459-953a-bf44f6433301","Type":"ContainerStarted","Data":"fa66d09479488879e1be53e80e21bf8dee25a5e0757c52472d6557021d9acce4"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.289484 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-74bc7" event={"ID":"216c07e5-bfea-47f3-9b08-b8f7e6c8177d","Type":"ContainerStarted","Data":"3243571c8d683050dbb7d3c32fe6dcba0471d7f9ca9aadc44c469384416c91c9"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.290413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" event={"ID":"ef700d08-f99c-4682-9099-ac6b8263b400","Type":"ContainerStarted","Data":"2c5bb99701a21bc1bdd3404f91e9c4e25afc0a6b76a06022656b315b052cef83"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.290436 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" event={"ID":"ef700d08-f99c-4682-9099-ac6b8263b400","Type":"ContainerStarted","Data":"bb16001f9b92ef7652130bab133a0751a520cc14707983cefd30d99360ad0e23"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.292490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" event={"ID":"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d","Type":"ContainerStarted","Data":"83ea97022b8d10f78f70f4d0a2abad3893eeb466329fdc399f5da9205de726cb"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.301368 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9lsfp" podStartSLOduration=119.301344488 podStartE2EDuration="1m59.301344488s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.295817925 +0000 UTC m=+138.365334257" watchObservedRunningTime="2025-12-05 06:48:24.301344488 +0000 UTC m=+138.370860820" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.306716 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.307442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" event={"ID":"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f","Type":"ContainerStarted","Data":"fb2713f784548a8c87d7b2eee7f999427c2824f848ce80d4b5c8e8b25db8af04"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.308509 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" event={"ID":"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76","Type":"ContainerStarted","Data":"7a65cb7d2ad33198696d28dd61ae60dd3e48f3f6e7816ce458e3332a618b1f17"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.308533 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" event={"ID":"20a95e2f-0993-4a2b-a5b6-5a3cc3ea1d76","Type":"ContainerStarted","Data":"af6dfe4b12fd4882f09ad11c7a8be3d1508867e84c48a05ccb6b05fcdaecd3d8"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.320168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x2jff" event={"ID":"009edd4d-dcfb-4a88-a93e-dbd2430403c1","Type":"ContainerStarted","Data":"80745bdfcf7e66d3a9b26663a2d9a186f0b31677ead4f5f19edeaa13b5753c39"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 
06:48:24.326769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" event={"ID":"79984b4c-0cd9-465d-9bc4-fcf4d9a57196","Type":"ContainerStarted","Data":"ff422ea534649452fde02ce53fb2d186ccf11184e3ab5fac7d92f188685a03e7"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.328032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" event={"ID":"6d76cd6b-eda9-4487-8665-8e99b372fa38","Type":"ContainerStarted","Data":"f63b1d22450e7674d00341988e53ba7b36fe107b8709e7165b29c76b70fb9d74"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.336988 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" event={"ID":"ef3fbf26-207a-4c4c-8296-a48fcbfa9641","Type":"ContainerStarted","Data":"d6a803495c6b2b2694b5f1f8e624a10d72cfa18a76a053f482746daee74de45c"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.337051 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" event={"ID":"ef3fbf26-207a-4c4c-8296-a48fcbfa9641","Type":"ContainerStarted","Data":"92885d8adff96bb631f1f53ca18052c70d7daef964043bcf519c315b54cf0e2d"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.338097 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.346553 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wgk69" event={"ID":"75f13827-e915-43a8-a59e-5aba80a424c1","Type":"ContainerStarted","Data":"a11d7530152ed0163930edb6fe97bbb0a1e316285c284f4aa0d30a960449a70b"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.357532 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.357897 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.358431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.358724 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.858713303 +0000 UTC m=+138.928229635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.397152 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" event={"ID":"b4bb7d7c-826d-4157-970e-c4d195647287","Type":"ContainerStarted","Data":"769adc8ba9f6dd0f8e0e8d57893814d1348b37089fca3c3f072253b0075dfa3e"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.397200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" event={"ID":"b4bb7d7c-826d-4157-970e-c4d195647287","Type":"ContainerStarted","Data":"f57060630beb86561393a692bc672485fc4057c70b59294877a2d9b3777a6a4f"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.397521 4780 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r2xld container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.397557 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" podUID="ef3fbf26-207a-4c4c-8296-a48fcbfa9641" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.417712 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.449348 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ggrnz" podStartSLOduration=119.449329042 podStartE2EDuration="1m59.449329042s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.448384516 +0000 UTC m=+138.517900848" watchObservedRunningTime="2025-12-05 06:48:24.449329042 +0000 UTC m=+138.518845374" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.450442 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dhv68" podStartSLOduration=119.450432693 podStartE2EDuration="1m59.450432693s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.397957384 +0000 UTC m=+138.467473706" watchObservedRunningTime="2025-12-05 06:48:24.450432693 +0000 UTC m=+138.519949025" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.459048 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" 
event={"ID":"4df77495-3e2c-4c13-823f-f217f0dcb8f5","Type":"ContainerStarted","Data":"a68c18b87c7340151ac383d32a3b835c80824842163ab287ec43ca0128a4b386"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.459259 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.462472 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:24.962452257 +0000 UTC m=+139.031968589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.504276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" event={"ID":"b802ab76-8dbe-4ddd-8704-be862bfb7598","Type":"ContainerStarted","Data":"d3e43d2e0dd4d686e498d4d4baa43d1a8924e8e61ec7ea1aa2c200b150d839a5"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.504321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" event={"ID":"b802ab76-8dbe-4ddd-8704-be862bfb7598","Type":"ContainerStarted","Data":"3b74b5a3a6f5f5f448a56b8b4a2fcc01e54163c34eb856d63403884fe0d39d7f"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.533436 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" event={"ID":"d8ba9f40-589c-4f71-9eff-6fee943bea65","Type":"ContainerStarted","Data":"985fb3d9038d17dc9883c02c1c829be191809277a2cf9d48bdfc05ff8b0be5bc"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.537071 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" event={"ID":"8260b9e3-bfa3-4d9a-9af8-4764100b21c0","Type":"ContainerStarted","Data":"8a4e087e58498f0a4ef23200920d836d2d7909b54a5550fd02b65536bc7414cd"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.538096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" event={"ID":"065d59a1-845a-4a35-8f55-6e550e259a33","Type":"ContainerStarted","Data":"eb123051b37f43312a599763fac50548f5b8ab5a80bac90e8d597385e14d7967"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.538130 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" event={"ID":"065d59a1-845a-4a35-8f55-6e550e259a33","Type":"ContainerStarted","Data":"1a7b5998ae1f95bdd6f7d46d90031841f8afa589ff5fd6f940707701f754f406"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.566517 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.567995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" event={"ID":"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35","Type":"ContainerStarted","Data":"7996c086cdfd23dc8d4595b2c5604e01363ef416c3967598c4bea18d681c8a72"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.568031 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" event={"ID":"0a485e21-6ee8-4849-92ab-9ec0e8b0aa35","Type":"ContainerStarted","Data":"6b0a1b9ade0a3e16b892975b6462f6da9639d7e578b5ea688d84412ed4563826"} Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.568719 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.068705231 +0000 UTC m=+139.138221563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.575932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" event={"ID":"4bdee9bf-d3b6-4832-b5e8-1ba27e3cc80f","Type":"ContainerStarted","Data":"a6be2f34e875ad3bee768d10cf419c4f479615a820bf0bf960802e726e0b63de"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.601580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" event={"ID":"5bbb474c-dfea-4d7f-802a-efa7e15d8595","Type":"ContainerStarted","Data":"ba6feaa582fd8cd535789340bdd0b78dd387da59d70e8691b9853695b3b9f15d"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.601625 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" event={"ID":"5bbb474c-dfea-4d7f-802a-efa7e15d8595","Type":"ContainerStarted","Data":"1e7fe35cc784483f7df1f822814afb6323df00a177c3f91a6115794e1b6f03b8"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.610468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" event={"ID":"c32f1f57-3c18-4efc-be09-e9abfef22c52","Type":"ContainerStarted","Data":"c77348b2c09a62919c46c7f5d32fc0839dfcbad954ce4dcd5759b8a21fc13f08"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.610506 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" 
event={"ID":"c32f1f57-3c18-4efc-be09-e9abfef22c52","Type":"ContainerStarted","Data":"1e5886ea184ed047fed04c8f407f68691179bea30cc6fac7e696f00498c729fa"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.611060 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" podStartSLOduration=119.611043128 podStartE2EDuration="1m59.611043128s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.609580998 +0000 UTC m=+138.679097330" watchObservedRunningTime="2025-12-05 06:48:24.611043128 +0000 UTC m=+138.680559460" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.613825 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" event={"ID":"7be48601-40b4-49d8-a6ed-3a0a9de0b668","Type":"ContainerStarted","Data":"e7a17f3e202c5aa025ce4d4d9a3e08289423f9f658f287c03a7f52cb7303d60a"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.616290 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" event={"ID":"3b5ca6f7-6820-4010-966e-05e4cf49ba03","Type":"ContainerStarted","Data":"589ab147a7ff2f73690502ed232c60292eb8adbad79b9ab1ce30dfc1bd424236"} Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.616321 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.634652 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x6gqb" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.649180 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87cmc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.649230 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.650068 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pgwq6" podStartSLOduration=119.650053133 podStartE2EDuration="1m59.650053133s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.649750104 +0000 UTC m=+138.719266436" watchObservedRunningTime="2025-12-05 06:48:24.650053133 +0000 UTC m=+138.719569465" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.668381 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.669449 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.169426671 +0000 UTC m=+139.238943003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.769729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.818716 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" podStartSLOduration=119.818699561 podStartE2EDuration="1m59.818699561s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.765257605 +0000 UTC m=+138.834773947" watchObservedRunningTime="2025-12-05 06:48:24.818699561 +0000 UTC m=+138.888215893" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.828130 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.328101763 +0000 UTC m=+139.397618095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.846122 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" podStartSLOduration=119.846104913 podStartE2EDuration="1m59.846104913s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.819618177 +0000 UTC m=+138.889134509" watchObservedRunningTime="2025-12-05 06:48:24.846104913 +0000 UTC m=+138.915621245" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.875812 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.883616 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-q4mtd" podStartSLOduration=119.883599665 podStartE2EDuration="1m59.883599665s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.855390621 +0000 UTC m=+138.924906953" watchObservedRunningTime="2025-12-05 06:48:24.883599665 +0000 UTC m=+138.953115997" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.885810 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.385790376 +0000 UTC m=+139.455306708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.894801 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4qnzs" podStartSLOduration=119.894788386 podStartE2EDuration="1m59.894788386s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.892975046 +0000 UTC m=+138.962491378" watchObservedRunningTime="2025-12-05 06:48:24.894788386 +0000 UTC m=+138.964304718" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.919344 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-q7nnk" podStartSLOduration=119.919326329 podStartE2EDuration="1m59.919326329s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.918265109 +0000 UTC m=+138.987781441" watchObservedRunningTime="2025-12-05 06:48:24.919326329 +0000 UTC m=+138.988842661" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.970825 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" podStartSLOduration=119.97080807 podStartE2EDuration="1m59.97080807s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:24.940574979 +0000 UTC m=+139.010091311" watchObservedRunningTime="2025-12-05 06:48:24.97080807 +0000 UTC m=+139.040324402" Dec 05 06:48:24 crc kubenswrapper[4780]: I1205 06:48:24.986594 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:24 crc kubenswrapper[4780]: E1205 06:48:24.987499 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.487486264 +0000 UTC m=+139.557002596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.021256 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kdc8g" podStartSLOduration=120.021240112 podStartE2EDuration="2m0.021240112s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.019758051 +0000 UTC m=+139.089274383" watchObservedRunningTime="2025-12-05 06:48:25.021240112 +0000 UTC m=+139.090756444" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.087207 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.087699 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.587680319 +0000 UTC m=+139.657196661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.188520 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.188834 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.688822811 +0000 UTC m=+139.758339143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.201319 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:25 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:25 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:25 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.201381 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.289220 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.289362 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.789339295 +0000 UTC m=+139.858855627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.289564 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.289860 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.78985045 +0000 UTC m=+139.859366782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.390677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.391008 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.890983111 +0000 UTC m=+139.960499443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.391190 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.391536 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.891524356 +0000 UTC m=+139.961040688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.492487 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.492800 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:25.992778391 +0000 UTC m=+140.062294713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.494612 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-trqhd" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.594243 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.594307 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.595242 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.09523034 +0000 UTC m=+140.164746672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.622369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" event={"ID":"3b5ca6f7-6820-4010-966e-05e4cf49ba03","Type":"ContainerStarted","Data":"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.623629 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87cmc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.624344 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.624974 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" event={"ID":"065d59a1-845a-4a35-8f55-6e550e259a33","Type":"ContainerStarted","Data":"b3ba758306250dcb33d3f65dcbac4fb15c1c23bddd64beb15a050c78128bb785"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.626563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" event={"ID":"cf0714a1-e162-4459-953a-bf44f6433301","Type":"ContainerStarted","Data":"b1058029e6e6d346b293145538b118d1eec78b28e1bce81ba9a6dc3c77d4a078"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.629033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" event={"ID":"4df77495-3e2c-4c13-823f-f217f0dcb8f5","Type":"ContainerStarted","Data":"18bd45256b8eac3639cc9a72c14acd3bcad279eb51beb630b9b9f52c7127849f"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.631617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" event={"ID":"9f0f238a-6517-4eae-b2b6-26bb4d01bb4c","Type":"ContainerStarted","Data":"ff8f810a23c24242a4cb057c0839d212edbf34bf58594ddd572dc1cf45c3c673"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.632891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" event={"ID":"6f6d802f-18a0-4ad8-90f7-d2e4f5e83bde","Type":"ContainerStarted","Data":"af9c40515b9f2a3deb5a5a4ba1a54639601a2e3b36919779354bbfce41ca932f"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.634329 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-74bc7" 
event={"ID":"216c07e5-bfea-47f3-9b08-b8f7e6c8177d","Type":"ContainerStarted","Data":"c52fd941bfae21de80f44e6e974c319d1dd81f5afe31eef40ba82b50dabc7d44"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.636057 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wgk69" event={"ID":"75f13827-e915-43a8-a59e-5aba80a424c1","Type":"ContainerStarted","Data":"66f30105c50a50f751f6977cd2d191cc18dca400c8d1965e0cf59278f7a1265a"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.637581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" event={"ID":"79984b4c-0cd9-465d-9bc4-fcf4d9a57196","Type":"ContainerStarted","Data":"56b3d2ebb22909c59d19a6e364c5fff8dff68860fa78944593b406581f2fc4f0"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.637810 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" event={"ID":"79984b4c-0cd9-465d-9bc4-fcf4d9a57196","Type":"ContainerStarted","Data":"3791eb797ef080b2a490469f7c54fceaabdb246f3c07f2d300b0176dcdb47879"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.638087 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.639553 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-94sr5" event={"ID":"6d76cd6b-eda9-4487-8665-8e99b372fa38","Type":"ContainerStarted","Data":"28ad29bf1d3604bf71f0088e8f5e7a25beaa535136d44dc8bf147e8f1a945666"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.640870 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" event={"ID":"fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f","Type":"ContainerStarted","Data":"7ef61d58b22780a17d436e88a3781d5169228aad17e039578cd2e70e7bf6d69a"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.641095 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.642552 4780 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ffg6g container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.642600 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" podUID="fbbccf9e-ddb5-4f42-b8a7-aa21a16c397f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.642675 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" event={"ID":"5bbb474c-dfea-4d7f-802a-efa7e15d8595","Type":"ContainerStarted","Data":"5ffd36f49ac5841e9980afce2f2c190a7d74cc515fa3ef0c5dbcd207fa91d6cf"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.643966 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" event={"ID":"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d","Type":"ContainerStarted","Data":"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.644112 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.646206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pcmq9" event={"ID":"d8ba9f40-589c-4f71-9eff-6fee943bea65","Type":"ContainerStarted","Data":"bde0c5014a4be9b50c2dfdaf63d250f97e460df8b1e362152306c666d033be56"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.647923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" event={"ID":"8260b9e3-bfa3-4d9a-9af8-4764100b21c0","Type":"ContainerStarted","Data":"be6a707d5bf712807ae271f65fe5611e82ecf931f1527de203b9e0790a6af188"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.650038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" event={"ID":"b4bb7d7c-826d-4157-970e-c4d195647287","Type":"ContainerStarted","Data":"24eb6f251ab921d51be2d5a504dd685ae7eae4f31c8335bbb194643ec5256eb7"} Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.686474 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gfpww" podStartSLOduration=120.686456016 podStartE2EDuration="2m0.686456016s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.669211797 +0000 UTC m=+139.738728129" watchObservedRunningTime="2025-12-05 06:48:25.686456016 +0000 UTC m=+139.755972338" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.687035 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" podStartSLOduration=120.687029101 podStartE2EDuration="2m0.687029101s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.685213641 +0000 UTC m=+139.754729973" watchObservedRunningTime="2025-12-05 06:48:25.687029101 +0000 UTC m=+139.756545433" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.695845 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.695978 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.19595761 +0000 UTC m=+140.265473952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.696526 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.697737 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.197717929 +0000 UTC m=+140.267234351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.705718 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xmmsj" podStartSLOduration=120.705699641 podStartE2EDuration="2m0.705699641s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.704057635 +0000 UTC m=+139.773573967" watchObservedRunningTime="2025-12-05 06:48:25.705699641 +0000 UTC m=+139.775215973" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.734299 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" podStartSLOduration=120.734280045 podStartE2EDuration="2m0.734280045s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.730245993 +0000 UTC m=+139.799762345" watchObservedRunningTime="2025-12-05 06:48:25.734280045 +0000 UTC m=+139.803796367" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.762266 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ljf89" podStartSLOduration=120.762243403 podStartE2EDuration="2m0.762243403s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.759672061 +0000 UTC m=+139.829188393" watchObservedRunningTime="2025-12-05 06:48:25.762243403 +0000 UTC 
m=+139.831759745" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.783420 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" podStartSLOduration=120.783398201 podStartE2EDuration="2m0.783398201s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.782007052 +0000 UTC m=+139.851523404" watchObservedRunningTime="2025-12-05 06:48:25.783398201 +0000 UTC m=+139.852914533" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.798423 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.798617 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.298587103 +0000 UTC m=+140.368103435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.799085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.799390 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.299374734 +0000 UTC m=+140.368891066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.847700 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z2lhl" podStartSLOduration=120.847683518 podStartE2EDuration="2m0.847683518s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.845380834 +0000 UTC m=+139.914897166" watchObservedRunningTime="2025-12-05 06:48:25.847683518 +0000 UTC m=+139.917199850" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.875308 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-74bc7" podStartSLOduration=8.875291946 podStartE2EDuration="8.875291946s" podCreationTimestamp="2025-12-05 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.87363906 +0000 UTC m=+139.943155392" watchObservedRunningTime="2025-12-05 06:48:25.875291946 +0000 UTC m=+139.944808278" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.903254 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.903460 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.403432758 +0000 UTC m=+140.472949090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.903843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:25 crc kubenswrapper[4780]: E1205 06:48:25.904168 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.404160718 +0000 UTC m=+140.473677050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.919831 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" podStartSLOduration=120.919813873 podStartE2EDuration="2m0.919813873s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.91932583 +0000 UTC m=+139.988842162" watchObservedRunningTime="2025-12-05 06:48:25.919813873 +0000 UTC m=+139.989330205" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.951431 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rh794" podStartSLOduration=120.951415972 podStartE2EDuration="2m0.951415972s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.948014338 +0000 UTC m=+140.017530670" watchObservedRunningTime="2025-12-05 06:48:25.951415972 +0000 UTC m=+140.020932304" Dec 05 06:48:25 crc kubenswrapper[4780]: I1205 06:48:25.995490 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zkrh4" podStartSLOduration=120.995474527 podStartE2EDuration="2m0.995474527s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.9945162 +0000 UTC m=+140.064032532" watchObservedRunningTime="2025-12-05 06:48:25.995474527 +0000 UTC m=+140.064990859" Dec 05 06:48:25 crc kubenswrapper[4780]: 
I1205 06:48:25.997127 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qdkv5" podStartSLOduration=120.997121883 podStartE2EDuration="2m0.997121883s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:25.972033995 +0000 UTC m=+140.041550347" watchObservedRunningTime="2025-12-05 06:48:25.997121883 +0000 UTC m=+140.066638215" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.004916 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.005552 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.505535907 +0000 UTC m=+140.575052239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.106895 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.107168 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.607153281 +0000 UTC m=+140.676669613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.185276 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.195503 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:26 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:26 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:26 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.195754 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.207860 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.208238 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.708223501 +0000 UTC m=+140.777739833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.309168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.309523 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:26.809502607 +0000 UTC m=+140.879018999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.410773 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.411055 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.911030099 +0000 UTC m=+140.980546431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.411131 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.411502 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:26.911487933 +0000 UTC m=+140.981004255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.513249 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.513496 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.013477207 +0000 UTC m=+141.082993539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.513814 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.514124 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.014115606 +0000 UTC m=+141.083631938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.614770 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.614974 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.114949209 +0000 UTC m=+141.184465541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.650389 4780 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r2xld container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.650446 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" podUID="ef3fbf26-207a-4c4c-8296-a48fcbfa9641" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.664933 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" event={"ID":"e0e0c03d-4b19-4fb8-ae3a-1270ed49de6e","Type":"ContainerStarted","Data":"5999d2c8d0f343c6f2c8a6a73f4a801cc7b61ae126cf616c70b194f9ab5f79f0"} Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.667374 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" event={"ID":"7be48601-40b4-49d8-a6ed-3a0a9de0b668","Type":"ContainerStarted","Data":"e324df80d619302bee20422ca0099b0a3fe52cfcff4b5a7a63c8d0e08478efe7"} Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.669690 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" event={"ID":"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5","Type":"ContainerStarted","Data":"a12950e4ecacda740f174da40652976249256125e1218a4676d7a9379188d8c5"} Dec 05 06:48:26 crc 
kubenswrapper[4780]: I1205 06:48:26.676161 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wgk69" event={"ID":"75f13827-e915-43a8-a59e-5aba80a424c1","Type":"ContainerStarted","Data":"cdfab0dce750fb8a603ef209077915b0ba474c9f3d74cc4b585a47123b696111"} Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.677029 4780 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-87cmc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.677063 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.688014 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ffg6g" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.689711 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-27sld" podStartSLOduration=121.689695427 podStartE2EDuration="2m1.689695427s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:26.686019375 +0000 UTC m=+140.755535707" watchObservedRunningTime="2025-12-05 06:48:26.689695427 +0000 UTC m=+140.759211759" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.718175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.718260 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r2xld" Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.718597 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wgk69" podStartSLOduration=9.71857341 podStartE2EDuration="9.71857341s" podCreationTimestamp="2025-12-05 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:26.717740916 +0000 UTC m=+140.787257248" watchObservedRunningTime="2025-12-05 06:48:26.71857341 +0000 UTC m=+140.788089742" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.718532 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.218518738 +0000 UTC m=+141.288035070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.823429 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.823711 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.323669311 +0000 UTC m=+141.393185643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.827736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.829044 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.329028911 +0000 UTC m=+141.398545243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.940317 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.940514 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.440486369 +0000 UTC m=+141.510002701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:26 crc kubenswrapper[4780]: I1205 06:48:26.940584 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:26 crc kubenswrapper[4780]: E1205 06:48:26.940954 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.440945302 +0000 UTC m=+141.510461634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.041947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.042263 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.542230628 +0000 UTC m=+141.611746960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.042464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.042793 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.542781204 +0000 UTC m=+141.612297536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.144031 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.144199 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.644168172 +0000 UTC m=+141.713684514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.144510 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.144764 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.644750968 +0000 UTC m=+141.714267300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.194754 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:27 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:27 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:27 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.194821 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.245484 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.245679 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.745654343 +0000 UTC m=+141.815170675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.245843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.246191 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.746179268 +0000 UTC m=+141.815695600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.301053 4780 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.347587 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.347720 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.84770116 +0000 UTC m=+141.917217492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.347847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.348159 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.848149283 +0000 UTC m=+141.917665615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.449435 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.449707 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.949674035 +0000 UTC m=+142.019190367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.449917 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.450249 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:27.950234101 +0000 UTC m=+142.019750433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.550566 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.550758 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.050733364 +0000 UTC m=+142.120249696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.551191 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.551443 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.051434724 +0000 UTC m=+142.120951056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.652146 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.652242 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.152226947 +0000 UTC m=+142.221743279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.652382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.652639 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.152633008 +0000 UTC m=+142.222149340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.682689 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" event={"ID":"b663fa26-bc85-4c6c-a8ef-c9e17cf7c2c5","Type":"ContainerStarted","Data":"d9649fff222949c276400fbe7dd894625626d9aacd7d318b21eb1b042c8277ed"} Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.684350 4780 generic.go:334] "Generic (PLEG): container finished" podID="8260b9e3-bfa3-4d9a-9af8-4764100b21c0" containerID="be6a707d5bf712807ae271f65fe5611e82ecf931f1527de203b9e0790a6af188" exitCode=0 Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.684407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" event={"ID":"8260b9e3-bfa3-4d9a-9af8-4764100b21c0","Type":"ContainerDied","Data":"be6a707d5bf712807ae271f65fe5611e82ecf931f1527de203b9e0790a6af188"} Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.686709 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" event={"ID":"7be48601-40b4-49d8-a6ed-3a0a9de0b668","Type":"ContainerStarted","Data":"7322ef8e260fd455129432dda8e07be5fa729f87e3f078d0a5438f8129074c3a"} Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.686765 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" event={"ID":"7be48601-40b4-49d8-a6ed-3a0a9de0b668","Type":"ContainerStarted","Data":"1ec3ee2dce295bed03e4a102438039e10a770444351e2f9fc603b1e5607b5cae"} Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.686980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.705571 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" podStartSLOduration=122.705552669 podStartE2EDuration="2m2.705552669s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:27.703562593 +0000 UTC m=+141.773078925" watchObservedRunningTime="2025-12-05 06:48:27.705552669 +0000 UTC m=+141.775068991" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.754031 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.754224 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 06:48:28.254197542 +0000 UTC m=+142.323713874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.754551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.754812 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.254799808 +0000 UTC m=+142.324316140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.818222 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.819135 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.821737 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.834081 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.858829 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.858989 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.358963213 +0000 UTC m=+142.428479545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.859112 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.859467 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.359457638 +0000 UTC m=+142.428973960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.960119 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.960298 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.460270791 +0000 UTC m=+142.529787123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.960364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnck\" (UniqueName: \"kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.960396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.960433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:27 crc kubenswrapper[4780]: I1205 06:48:27.960738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:27 crc kubenswrapper[4780]: E1205 06:48:27.961016 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.46099821 +0000 UTC m=+142.530514562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.061802 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:28 crc kubenswrapper[4780]: E1205 06:48:28.062055 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.562026779 +0000 UTC m=+142.631543121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.062461 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.062602 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cnck\" (UniqueName: \"kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.062685 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.062772 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: E1205 06:48:28.063148 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.56313163 +0000 UTC m=+142.632647962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.063279 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.063570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.128206 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cnck\" (UniqueName: \"kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck\") pod \"community-operators-ljcn2\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.131830 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.163817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:28 crc kubenswrapper[4780]: E1205 06:48:28.164493 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.664477958 +0000 UTC m=+142.733994290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.197054 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:28 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:28 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:28 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.197110 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.219157 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.219968 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.231550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.265361 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: E1205 06:48:28.265799 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 06:48:28.765778724 +0000 UTC m=+142.835295066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hcl4v" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.279352 4780 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-05T06:48:27.301083474Z","Handler":null,"Name":""} Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.294481 4780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.294514 4780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.366592 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.366807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8fhc\" (UniqueName: \"kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.366833 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.366931 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.371756 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.419702 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.420748 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.425589 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.435119 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.463951 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.468523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.468615 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8fhc\" (UniqueName: \"kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.468640 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.468676 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.469494 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.470192 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.474202 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.474254 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.496258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8fhc\" (UniqueName: \"kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc\") pod \"community-operators-rwbtd\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.505274 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hcl4v\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.544663 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.551891 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.570471 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.570547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.570630 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24qf\" (UniqueName: \"kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.619640 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.620539 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.642827 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.672137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.672257 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24qf\" (UniqueName: \"kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.672291 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.672602 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.672703 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.694402 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24qf\" (UniqueName: \"kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf\") pod \"certified-operators-mhgg6\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.697195 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerStarted","Data":"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0"} Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.697243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerStarted","Data":"579dbcdcdf6d8473736d8aabe28921b177a0b91d547c2eadd15cc34bbacf5539"} Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.701605 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" 
event={"ID":"7be48601-40b4-49d8-a6ed-3a0a9de0b668","Type":"ContainerStarted","Data":"b27238fb60822db5623ce02b8429a39df797a357077419b56a68e6aabe981de4"} Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.739791 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.759952 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9rr45" podStartSLOduration=11.759936402 podStartE2EDuration="11.759936402s" podCreationTimestamp="2025-12-05 06:48:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:28.74296046 +0000 UTC m=+142.812476792" watchObservedRunningTime="2025-12-05 06:48:28.759936402 +0000 UTC m=+142.829452734" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.760207 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.775519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62vp\" (UniqueName: \"kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.775566 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.775645 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.779209 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.880706 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.880816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l62vp\" (UniqueName: \"kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.880846 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.884117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.885094 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.892298 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.903870 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l62vp\" (UniqueName: \"kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp\") pod \"certified-operators-tc7qm\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.937330 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:48:28 crc kubenswrapper[4780]: I1205 06:48:28.948767 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:48:29 crc kubenswrapper[4780]: W1205 06:48:29.001336 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b07063_6822_4f6a_ab0c_d6951daae0c3.slice/crio-82e937dca8e508691b2fab96dfbcf89318cf4c8dabf46282fb7229c1a8956d66 WatchSource:0}: Error finding container 82e937dca8e508691b2fab96dfbcf89318cf4c8dabf46282fb7229c1a8956d66: Status 404 returned error can't find the container with id 82e937dca8e508691b2fab96dfbcf89318cf4c8dabf46282fb7229c1a8956d66 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.083445 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume\") pod \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.083506 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzwd\" (UniqueName: \"kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd\") pod \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.083564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume\") pod \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\" (UID: \"8260b9e3-bfa3-4d9a-9af8-4764100b21c0\") " Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.084496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "8260b9e3-bfa3-4d9a-9af8-4764100b21c0" (UID: "8260b9e3-bfa3-4d9a-9af8-4764100b21c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.088153 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8260b9e3-bfa3-4d9a-9af8-4764100b21c0" (UID: "8260b9e3-bfa3-4d9a-9af8-4764100b21c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.088220 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd" (OuterVolumeSpecName: "kube-api-access-vdzwd") pod "8260b9e3-bfa3-4d9a-9af8-4764100b21c0" (UID: "8260b9e3-bfa3-4d9a-9af8-4764100b21c0"). InnerVolumeSpecName "kube-api-access-vdzwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.185050 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.185720 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzwd\" (UniqueName: \"kubernetes.io/projected/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-kube-api-access-vdzwd\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.185731 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8260b9e3-bfa3-4d9a-9af8-4764100b21c0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.198306 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:29 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:29 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:29 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.198350 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.337307 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"] Dec 05 06:48:29 crc kubenswrapper[4780]: W1205 06:48:29.342685 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f17411e_4523_4882_ae69_60b50d8dfd43.slice/crio-be5252ba858e7707a9fb55ce2f9fd920687569bb57bc48f4eb2458ecd1d210c6 WatchSource:0}: Error finding container be5252ba858e7707a9fb55ce2f9fd920687569bb57bc48f4eb2458ecd1d210c6: Status 404 returned error can't find the container with id be5252ba858e7707a9fb55ce2f9fd920687569bb57bc48f4eb2458ecd1d210c6 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.429618 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sdctn" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.714131 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.714194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s" event={"ID":"8260b9e3-bfa3-4d9a-9af8-4764100b21c0","Type":"ContainerDied","Data":"8a4e087e58498f0a4ef23200920d836d2d7909b54a5550fd02b65536bc7414cd"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.714243 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4e087e58498f0a4ef23200920d836d2d7909b54a5550fd02b65536bc7414cd" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.716928 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerID="1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e" exitCode=0 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.717038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerDied","Data":"1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.717072 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerStarted","Data":"82e937dca8e508691b2fab96dfbcf89318cf4c8dabf46282fb7229c1a8956d66"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.718850 4780 generic.go:334] "Generic (PLEG): container finished" podID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerID="cea7ac5c041eae31fa9eb106bd87f9861e3c10089d9e9280551c99ec9985b562" exitCode=0 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.718991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerDied","Data":"cea7ac5c041eae31fa9eb106bd87f9861e3c10089d9e9280551c99ec9985b562"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.719102 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerStarted","Data":"35cb5285248e5443e7040ec06983fb0418b409925407ff7bcaa1ae2242d6b081"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.719481 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.723279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" event={"ID":"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232","Type":"ContainerStarted","Data":"4dfe6baf39024118067fbedb8a169e08b0f0b757289a86fa99050ec557d71912"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.723340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" event={"ID":"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232","Type":"ContainerStarted","Data":"88acab8b656b59881ba1a6db2fa80801b675bdf3fd62f1f22a6610e1e95ad61a"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.723542 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.727195 4780 
generic.go:334] "Generic (PLEG): container finished" podID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerID="aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0" exitCode=0 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.727478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerDied","Data":"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.729700 4780 generic.go:334] "Generic (PLEG): container finished" podID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerID="fe6e22155478a18f6e0f5a7a2b1ca2e33e05c0d71c996910fdcb76e7d54b2166" exitCode=0 Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.729847 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerDied","Data":"fe6e22155478a18f6e0f5a7a2b1ca2e33e05c0d71c996910fdcb76e7d54b2166"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.729891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerStarted","Data":"be5252ba858e7707a9fb55ce2f9fd920687569bb57bc48f4eb2458ecd1d210c6"} Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.809190 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" podStartSLOduration=124.809174602 podStartE2EDuration="2m4.809174602s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:29.802849626 +0000 UTC m=+143.872365988" watchObservedRunningTime="2025-12-05 06:48:29.809174602 +0000 UTC m=+143.878690934" Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.907566 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:48:29 crc kubenswrapper[4780]: I1205 06:48:29.907640 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.020622 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:48:30 crc kubenswrapper[4780]: E1205 06:48:30.020816 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8260b9e3-bfa3-4d9a-9af8-4764100b21c0" containerName="collect-profiles" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.020829 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8260b9e3-bfa3-4d9a-9af8-4764100b21c0" containerName="collect-profiles" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.021082 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8260b9e3-bfa3-4d9a-9af8-4764100b21c0" containerName="collect-profiles" Dec 05 06:48:30 crc 
kubenswrapper[4780]: I1205 06:48:30.021978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.025530 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.028632 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.102454 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.102558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5ls\" (UniqueName: \"kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.102594 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.141630 4780 patch_prober.go:28] interesting pod/console-f9d7485db-mw286 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.141685 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mw286" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.146373 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.146939 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.147032 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.191727 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.194964 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:30 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:30 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:30 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.195083 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.203302 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.203406 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r5ls\" (UniqueName: \"kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.203857 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.204051 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.206334 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.222161 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r5ls\" (UniqueName: \"kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls\") pod \"redhat-marketplace-qgdk4\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.342674 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.354940 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.355011 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.382838 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.423256 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.424968 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.430085 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.509246 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vzm\" (UniqueName: \"kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.509316 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.509385 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.610938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vzm\" (UniqueName: \"kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.611025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.611090 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " 
pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.612117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.612171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.620939 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.653636 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vzm\" (UniqueName: \"kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm\") pod \"redhat-marketplace-b4zdl\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.742673 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5xj2l" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.752864 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.770847 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.771749 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.778218 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.778534 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.808292 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.894162 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.917317 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:30 crc kubenswrapper[4780]: I1205 06:48:30.917565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.018874 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.019025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.019148 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.042793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.106526 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.143598 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.196641 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:31 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:31 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:31 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.196767 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.228456 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.229519 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: W1205 06:48:31.234486 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd0145ad_560f_4192_b244_63c7c4b38748.slice/crio-797573d292d604bd82aed639a66e622c1906145dc8b3a33f806cc32bd3f0543d WatchSource:0}: Error finding container 797573d292d604bd82aed639a66e622c1906145dc8b3a33f806cc32bd3f0543d: Status 404 returned error can't find the container with id 797573d292d604bd82aed639a66e622c1906145dc8b3a33f806cc32bd3f0543d Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.235030 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.241470 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.322276 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.322397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.322463 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqpjl\" (UniqueName: \"kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " 
pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.423736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.423811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqpjl\" (UniqueName: \"kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.423835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.424207 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.424448 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.444679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqpjl\" (UniqueName: \"kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl\") pod \"redhat-operators-5b2tw\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.563951 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.620312 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvd4c"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.621726 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.631841 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvd4c"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.721514 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.728749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.728803 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.728840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n2n\" (UniqueName: \"kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.762008 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerDied","Data":"72d9bd33842b9eb005a25316ff9d14faa4e730680c3296cebaad8a0e9d5c4513"} Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.761850 4780 generic.go:334] "Generic (PLEG): container finished" podID="dd0145ad-560f-4192-b244-63c7c4b38748" containerID="72d9bd33842b9eb005a25316ff9d14faa4e730680c3296cebaad8a0e9d5c4513" exitCode=0 Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.762804 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerStarted","Data":"797573d292d604bd82aed639a66e622c1906145dc8b3a33f806cc32bd3f0543d"} Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.764866 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerID="4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff" exitCode=0 Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.765016 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerDied","Data":"4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff"} Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.765072 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerStarted","Data":"885c17bb26f0ad4b3328e1c3da93fa7c9e403fe8687c2fdfdd20865616e683d2"} Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.832800 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.832873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.832999 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9n2n\" (UniqueName: \"kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.833600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.833649 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:31 crc kubenswrapper[4780]: I1205 06:48:31.861039 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9n2n\" (UniqueName: \"kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n\") pod \"redhat-operators-vvd4c\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.020790 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.121603 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:48:32 crc kubenswrapper[4780]: W1205 06:48:32.133122 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dccd32c_dbd1_45fb_8743_8ebd508423ad.slice/crio-d2d980184575a82e722f2ac8189d6cda1da2ff5cf72098b53d2b6e492c296feb WatchSource:0}: Error finding container d2d980184575a82e722f2ac8189d6cda1da2ff5cf72098b53d2b6e492c296feb: Status 404 returned error can't find the container with id d2d980184575a82e722f2ac8189d6cda1da2ff5cf72098b53d2b6e492c296feb Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.195764 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:32 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:32 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:32 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.195988 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.292299 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvd4c"] Dec 05 06:48:32 crc kubenswrapper[4780]: W1205 06:48:32.316557 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b591ab3_7725_4984_aa63_8057aadb595e.slice/crio-0cf1e9c105231fa30e8d385a35a5718025939bb3700c44c6390779467411e4c6 WatchSource:0}: Error finding container 0cf1e9c105231fa30e8d385a35a5718025939bb3700c44c6390779467411e4c6: Status 404 returned error can't find the container with id 0cf1e9c105231fa30e8d385a35a5718025939bb3700c44c6390779467411e4c6 Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.581330 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.582012 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.584121 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.588828 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.589028 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.659304 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.659396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.760995 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.761073 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.761806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.788141 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerStarted","Data":"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73"} Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.788183 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerStarted","Data":"d2d980184575a82e722f2ac8189d6cda1da2ff5cf72098b53d2b6e492c296feb"} Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.789701 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") 
" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.790635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9e87353c-6a3f-49db-9a15-9e1c0bfd028d","Type":"ContainerStarted","Data":"b69ac153cee0a4423545845c26bdc41ac901526fa5202086757016dbeee0c830"} Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.790699 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9e87353c-6a3f-49db-9a15-9e1c0bfd028d","Type":"ContainerStarted","Data":"5d7d56948d9b148cd40d7fcba7a0be85e73c237fc0bc574086c6e9ea1845dee7"} Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.792011 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerStarted","Data":"0cf1e9c105231fa30e8d385a35a5718025939bb3700c44c6390779467411e4c6"} Dec 05 06:48:32 crc kubenswrapper[4780]: I1205 06:48:32.912642 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.166121 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.166458 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.166488 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.166525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.169729 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.171318 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.171827 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.171888 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.194393 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:33 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:33 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:33 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.194442 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.348860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.357052 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.364455 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.451176 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.870111 4780 generic.go:334] "Generic (PLEG): container finished" podID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerID="b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73" exitCode=0 Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.870500 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerDied","Data":"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73"} Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.877108 4780 generic.go:334] "Generic (PLEG): container finished" podID="9e87353c-6a3f-49db-9a15-9e1c0bfd028d" containerID="b69ac153cee0a4423545845c26bdc41ac901526fa5202086757016dbeee0c830" exitCode=0 Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.877240 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9e87353c-6a3f-49db-9a15-9e1c0bfd028d","Type":"ContainerDied","Data":"b69ac153cee0a4423545845c26bdc41ac901526fa5202086757016dbeee0c830"} Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.881082 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b591ab3-7725-4984-aa63-8057aadb595e" containerID="ce5e9ec50468ba4e638003d20fa244e11de66451de82ed55501773129383696c" exitCode=0 Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.881705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerDied","Data":"ce5e9ec50468ba4e638003d20fa244e11de66451de82ed55501773129383696c"} Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.889712 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"54214ff1c0b64425de69664fe6035f178fd5a36c5be327b8c6ebeeb9285a86c4"} Dec 05 06:48:33 crc kubenswrapper[4780]: I1205 06:48:33.892799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3aceb64c-4c3b-4ffe-8f4d-70e719223391","Type":"ContainerStarted","Data":"7d22b53045dfabf5e7e2bd625bb971ff749a02779a8c390f3509f26f4abef0b6"} Dec 05 06:48:34 crc kubenswrapper[4780]: W1205 06:48:34.015860 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-6195cbcf6947e971fa1ddcbe656f105f035f7471449907bcf66b9113f854a8ee WatchSource:0}: Error finding container 6195cbcf6947e971fa1ddcbe656f105f035f7471449907bcf66b9113f854a8ee: Status 404 returned error can't find the container with id 6195cbcf6947e971fa1ddcbe656f105f035f7471449907bcf66b9113f854a8ee Dec 05 06:48:34 crc kubenswrapper[4780]: W1205 06:48:34.016697 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b019ecf5453d2e1cfeb2973db098c7838344ea8e5d03f638717d57932bc40d18 WatchSource:0}: Error finding container b019ecf5453d2e1cfeb2973db098c7838344ea8e5d03f638717d57932bc40d18: Status 404 returned error can't find the container with id 
b019ecf5453d2e1cfeb2973db098c7838344ea8e5d03f638717d57932bc40d18 Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.196949 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:34 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:34 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:34 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.197288 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.922850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e6a90168244ef1eaad8f9ab1ce35659b3fb2de5307f4337938759a4649df3139"} Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.922929 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6195cbcf6947e971fa1ddcbe656f105f035f7471449907bcf66b9113f854a8ee"} Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.927227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3aceb64c-4c3b-4ffe-8f4d-70e719223391","Type":"ContainerStarted","Data":"d7f10709b27daf17e6e715d70591a65520eb525377bc6a387c235e5119ebc94a"} Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.929271 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d9cd42639e5d2c105f04d3508e86cd650229c680b8e82cf768d01503f110f07c"} Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.932649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e88d2b383d299e31c99040c6a3bc3aa71141214eb16ad866b050658ca218c9d6"} Dec 05 06:48:34 crc kubenswrapper[4780]: I1205 06:48:34.932693 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b019ecf5453d2e1cfeb2973db098c7838344ea8e5d03f638717d57932bc40d18"} Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.197836 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:35 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:35 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:35 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.198250 4780 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.303976 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.321492 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.321476111 podStartE2EDuration="3.321476111s" podCreationTimestamp="2025-12-05 06:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:48:34.948471911 +0000 UTC m=+149.017988253" watchObservedRunningTime="2025-12-05 06:48:35.321476111 +0000 UTC m=+149.390992443" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.426046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir\") pod \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.426126 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access\") pod \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\" (UID: \"9e87353c-6a3f-49db-9a15-9e1c0bfd028d\") " Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.427168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9e87353c-6a3f-49db-9a15-9e1c0bfd028d" (UID: "9e87353c-6a3f-49db-9a15-9e1c0bfd028d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.436393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9e87353c-6a3f-49db-9a15-9e1c0bfd028d" (UID: "9e87353c-6a3f-49db-9a15-9e1c0bfd028d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.528172 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.528209 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e87353c-6a3f-49db-9a15-9e1c0bfd028d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.675523 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wgk69" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.945088 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.945093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9e87353c-6a3f-49db-9a15-9e1c0bfd028d","Type":"ContainerDied","Data":"5d7d56948d9b148cd40d7fcba7a0be85e73c237fc0bc574086c6e9ea1845dee7"} Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.945296 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7d56948d9b148cd40d7fcba7a0be85e73c237fc0bc574086c6e9ea1845dee7" Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.946860 4780 generic.go:334] "Generic (PLEG): container finished" podID="3aceb64c-4c3b-4ffe-8f4d-70e719223391" containerID="d7f10709b27daf17e6e715d70591a65520eb525377bc6a387c235e5119ebc94a" exitCode=0 Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.947862 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3aceb64c-4c3b-4ffe-8f4d-70e719223391","Type":"ContainerDied","Data":"d7f10709b27daf17e6e715d70591a65520eb525377bc6a387c235e5119ebc94a"} Dec 05 06:48:35 crc kubenswrapper[4780]: I1205 06:48:35.947908 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 06:48:36 crc kubenswrapper[4780]: I1205 06:48:36.194761 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:36 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:36 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:36 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:36 crc kubenswrapper[4780]: I1205 06:48:36.195047 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:37 crc kubenswrapper[4780]: I1205 06:48:37.193547 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:37 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:37 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:37 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:37 crc kubenswrapper[4780]: I1205 06:48:37.193653 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:38 crc kubenswrapper[4780]: I1205 06:48:38.192781 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:38 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:38 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:38 crc 
kubenswrapper[4780]: healthz check failed Dec 05 06:48:38 crc kubenswrapper[4780]: I1205 06:48:38.192974 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:39 crc kubenswrapper[4780]: I1205 06:48:39.192361 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:39 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:39 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:39 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:39 crc kubenswrapper[4780]: I1205 06:48:39.192434 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:40 crc kubenswrapper[4780]: I1205 06:48:40.139479 4780 patch_prober.go:28] interesting pod/console-f9d7485db-mw286 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 05 06:48:40 crc kubenswrapper[4780]: I1205 06:48:40.139553 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mw286" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 05 06:48:40 crc kubenswrapper[4780]: I1205 06:48:40.193387 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:40 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:40 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:40 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:40 crc kubenswrapper[4780]: I1205 06:48:40.193454 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.193587 4780 patch_prober.go:28] interesting pod/router-default-5444994796-krv7k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 06:48:41 crc kubenswrapper[4780]: [-]has-synced failed: reason withheld Dec 05 06:48:41 crc kubenswrapper[4780]: [+]process-running ok Dec 05 06:48:41 crc kubenswrapper[4780]: healthz check failed Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.193648 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-krv7k" podUID="8fd7fc79-44f5-4c00-8897-962ce4018e34" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 06:48:41 crc 
kubenswrapper[4780]: I1205 06:48:41.487326 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.647634 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access\") pod \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.647673 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir\") pod \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\" (UID: \"3aceb64c-4c3b-4ffe-8f4d-70e719223391\") " Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.647822 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3aceb64c-4c3b-4ffe-8f4d-70e719223391" (UID: "3aceb64c-4c3b-4ffe-8f4d-70e719223391"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.647930 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.669709 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3aceb64c-4c3b-4ffe-8f4d-70e719223391" (UID: "3aceb64c-4c3b-4ffe-8f4d-70e719223391"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:48:41 crc kubenswrapper[4780]: I1205 06:48:41.750178 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3aceb64c-4c3b-4ffe-8f4d-70e719223391-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 06:48:42 crc kubenswrapper[4780]: I1205 06:48:42.052469 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3aceb64c-4c3b-4ffe-8f4d-70e719223391","Type":"ContainerDied","Data":"7d22b53045dfabf5e7e2bd625bb971ff749a02779a8c390f3509f26f4abef0b6"} Dec 05 06:48:42 crc kubenswrapper[4780]: I1205 06:48:42.052520 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d22b53045dfabf5e7e2bd625bb971ff749a02779a8c390f3509f26f4abef0b6" Dec 05 06:48:42 crc kubenswrapper[4780]: I1205 06:48:42.052566 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 06:48:42 crc kubenswrapper[4780]: I1205 06:48:42.193740 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:42 crc kubenswrapper[4780]: I1205 06:48:42.195818 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-krv7k" Dec 05 06:48:47 crc kubenswrapper[4780]: I1205 06:48:47.349993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:47 crc kubenswrapper[4780]: I1205 06:48:47.355113 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c29a8f3d-4c29-4bfe-a8ab-6d28970106be-metrics-certs\") pod \"network-metrics-daemon-zkjck\" (UID: \"c29a8f3d-4c29-4bfe-a8ab-6d28970106be\") " pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:47 crc kubenswrapper[4780]: I1205 06:48:47.451990 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-zkjck" Dec 05 06:48:48 crc kubenswrapper[4780]: I1205 06:48:48.560185 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:48:50 crc kubenswrapper[4780]: I1205 06:48:50.289206 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:50 crc kubenswrapper[4780]: I1205 06:48:50.292521 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:48:54 crc kubenswrapper[4780]: E1205 06:48:54.611802 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 06:48:54 crc kubenswrapper[4780]: E1205 06:48:54.612329 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4vzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b4zdl_openshift-marketplace(dd0145ad-560f-4192-b244-63c7c4b38748): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:54 crc kubenswrapper[4780]: E1205 06:48:54.614568 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b4zdl" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" Dec 05 06:48:55 crc kubenswrapper[4780]: E1205 06:48:55.023587 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 06:48:55 crc kubenswrapper[4780]: E1205 06:48:55.023856 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6r5ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qgdk4_openshift-marketplace(ad87a211-56cb-40ed-8d89-33f1900987d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:55 crc kubenswrapper[4780]: E1205 06:48:55.025236 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qgdk4" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" Dec 05 06:48:56 crc kubenswrapper[4780]: E1205 06:48:56.152434 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 06:48:56 crc kubenswrapper[4780]: E1205 06:48:56.152556 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8fhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rwbtd_openshift-marketplace(70d64414-49e8-4453-9d43-c0a53ac678ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:56 crc kubenswrapper[4780]: E1205 06:48:56.153745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rwbtd" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.422446 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rwbtd" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.422473 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qgdk4" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.422560 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b4zdl" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.508977 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.509267 4780 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cnck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ljcn2_openshift-marketplace(623f84ec-99d6-44fc-8633-bf158d5b8dda): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.510545 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ljcn2" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.606044 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.606619 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l62vp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tc7qm_openshift-marketplace(7f17411e-4523-4882-ae69-60b50d8dfd43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.608050 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tc7qm" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.723157 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.723299 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j24qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mhgg6_openshift-marketplace(a9b07063-6822-4f6a-ab0c-d6951daae0c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:48:58 crc kubenswrapper[4780]: E1205 06:48:58.724489 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mhgg6" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" Dec 05 06:48:59 crc kubenswrapper[4780]: I1205 06:48:59.908318 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:48:59 crc kubenswrapper[4780]: I1205 06:48:59.908580 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:49:00 crc kubenswrapper[4780]: I1205 06:49:00.867520 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dllgr" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.176504 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tc7qm" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.176557 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mhgg6" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.176616 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ljcn2" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.226626 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.226765 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9n2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vvd4c_openshift-marketplace(9b591ab3-7725-4984-aa63-8057aadb595e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.229012 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vvd4c" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.233517 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.233639 4780 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqpjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5b2tw_openshift-marketplace(7dccd32c-dbd1-45fb-8743-8ebd508423ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 06:49:01 crc kubenswrapper[4780]: E1205 06:49:01.234963 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5b2tw" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" Dec 05 06:49:01 crc kubenswrapper[4780]: I1205 06:49:01.355529 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-zkjck"] Dec 05 06:49:02 crc kubenswrapper[4780]: I1205 06:49:02.221228 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkjck" event={"ID":"c29a8f3d-4c29-4bfe-a8ab-6d28970106be","Type":"ContainerStarted","Data":"98790628f50e80397695561d23856c8d92fd5961f22d130d1fbdcb2d39ad0a3b"} Dec 05 06:49:02 crc kubenswrapper[4780]: I1205 06:49:02.221585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkjck" event={"ID":"c29a8f3d-4c29-4bfe-a8ab-6d28970106be","Type":"ContainerStarted","Data":"0b17a3bd771e46869463f15168a2c439bfd03b92a7564b42fb4bbdde7fa23e5d"} Dec 05 06:49:02 crc kubenswrapper[4780]: I1205 06:49:02.221602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-zkjck" event={"ID":"c29a8f3d-4c29-4bfe-a8ab-6d28970106be","Type":"ContainerStarted","Data":"ea27e8de43d538134d88683884f474d2ee53df508818f392fc5e0301275668bd"} Dec 05 06:49:02 crc kubenswrapper[4780]: E1205 06:49:02.222856 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vvd4c" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" Dec 05 06:49:02 crc kubenswrapper[4780]: E1205 06:49:02.223141 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5b2tw" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" Dec 05 06:49:02 crc kubenswrapper[4780]: I1205 06:49:02.257413 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-zkjck" podStartSLOduration=157.257397454 podStartE2EDuration="2m37.257397454s" podCreationTimestamp="2025-12-05 06:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:49:02.256872769 +0000 UTC m=+176.326389111" watchObservedRunningTime="2025-12-05 06:49:02.257397454 +0000 UTC m=+176.326913786" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.973144 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 06:49:06 crc kubenswrapper[4780]: E1205 06:49:06.973908 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e87353c-6a3f-49db-9a15-9e1c0bfd028d" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.973930 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e87353c-6a3f-49db-9a15-9e1c0bfd028d" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: E1205 06:49:06.973956 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aceb64c-4c3b-4ffe-8f4d-70e719223391" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.973965 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aceb64c-4c3b-4ffe-8f4d-70e719223391" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.974090 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aceb64c-4c3b-4ffe-8f4d-70e719223391" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.974103 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e87353c-6a3f-49db-9a15-9e1c0bfd028d" containerName="pruner" Dec 05 06:49:06 crc kubenswrapper[4780]: I1205 06:49:06.974707 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:06.979913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.012688 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.013153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.013335 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.014458 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.115784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.115902 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.115995 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.133192 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.323699 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:07 crc kubenswrapper[4780]: I1205 06:49:07.532493 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 06:49:08 crc kubenswrapper[4780]: I1205 06:49:08.247344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53a3f3b-36f7-48ac-9ca3-de53624f7af9","Type":"ContainerStarted","Data":"f285855a7933a0f804b82a97cfa2ce01647d851633a8908e221c101fdd893e74"} Dec 05 06:49:08 crc kubenswrapper[4780]: I1205 06:49:08.247700 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53a3f3b-36f7-48ac-9ca3-de53624f7af9","Type":"ContainerStarted","Data":"4ec6dfa3971f099c3867d3d423691cc81c22961a76a8be33ad88afbdfa33dbc9"} Dec 05 06:49:08 crc kubenswrapper[4780]: I1205 06:49:08.261639 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.261619397 podStartE2EDuration="2.261619397s" podCreationTimestamp="2025-12-05 06:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:49:08.261120504 +0000 UTC m=+182.330636836" watchObservedRunningTime="2025-12-05 06:49:08.261619397 +0000 UTC m=+182.331135729" Dec 05 06:49:09 crc kubenswrapper[4780]: I1205 06:49:09.252994 4780 generic.go:334] "Generic (PLEG): container finished" podID="f53a3f3b-36f7-48ac-9ca3-de53624f7af9" containerID="f285855a7933a0f804b82a97cfa2ce01647d851633a8908e221c101fdd893e74" exitCode=0 Dec 05 06:49:09 crc kubenswrapper[4780]: I1205 06:49:09.253034 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53a3f3b-36f7-48ac-9ca3-de53624f7af9","Type":"ContainerDied","Data":"f285855a7933a0f804b82a97cfa2ce01647d851633a8908e221c101fdd893e74"} Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.494977 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.560422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access\") pod \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.560587 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir\") pod \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\" (UID: \"f53a3f3b-36f7-48ac-9ca3-de53624f7af9\") " Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.560637 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f53a3f3b-36f7-48ac-9ca3-de53624f7af9" (UID: "f53a3f3b-36f7-48ac-9ca3-de53624f7af9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.561006 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.567101 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f53a3f3b-36f7-48ac-9ca3-de53624f7af9" (UID: "f53a3f3b-36f7-48ac-9ca3-de53624f7af9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:10 crc kubenswrapper[4780]: I1205 06:49:10.662408 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f53a3f3b-36f7-48ac-9ca3-de53624f7af9-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.263411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f53a3f3b-36f7-48ac-9ca3-de53624f7af9","Type":"ContainerDied","Data":"4ec6dfa3971f099c3867d3d423691cc81c22961a76a8be33ad88afbdfa33dbc9"} Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.263464 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.263472 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec6dfa3971f099c3867d3d423691cc81c22961a76a8be33ad88afbdfa33dbc9" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.766638 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 06:49:11 crc kubenswrapper[4780]: E1205 06:49:11.766957 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53a3f3b-36f7-48ac-9ca3-de53624f7af9" containerName="pruner" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.766975 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53a3f3b-36f7-48ac-9ca3-de53624f7af9" containerName="pruner" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.767094 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53a3f3b-36f7-48ac-9ca3-de53624f7af9" containerName="pruner" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.767441 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.777836 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.777945 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.778028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.778777 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.782167 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.783841 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.879581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.879697 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.879725 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.880173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.880219 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock\") pod \"installer-9-crc\" (UID: 
\"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:11 crc kubenswrapper[4780]: I1205 06:49:11.897629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access\") pod \"installer-9-crc\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:12 crc kubenswrapper[4780]: I1205 06:49:12.091033 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:12 crc kubenswrapper[4780]: I1205 06:49:12.279158 4780 generic.go:334] "Generic (PLEG): container finished" podID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerID="e810214e807aecdf78a33f26a7d4eb0c8df8bbdc8cdf29138367e1aa9580a678" exitCode=0 Dec 05 06:49:12 crc kubenswrapper[4780]: I1205 06:49:12.279208 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerDied","Data":"e810214e807aecdf78a33f26a7d4eb0c8df8bbdc8cdf29138367e1aa9580a678"} Dec 05 06:49:12 crc kubenswrapper[4780]: I1205 06:49:12.350513 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.286917 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerStarted","Data":"902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d"} Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.289130 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73c1f59b-ba4a-4d44-9be6-6166087ab9e3","Type":"ContainerStarted","Data":"d01d2fd2a5a7965a1626c65b8b4e9f0589aa62940731275edc4a4abd4154bd97"} Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.289173 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73c1f59b-ba4a-4d44-9be6-6166087ab9e3","Type":"ContainerStarted","Data":"04afc27d87375445c7d22d2e9c33b3349adb78cf4c03e3ea9e8ea4fd15e95964"} Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.290930 4780 generic.go:334] "Generic (PLEG): container finished" podID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerID="ca79e1b967fda9538ac8c7616c1519d92efdda231cbec9886928b3da05028f9a" exitCode=0 Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.290988 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerDied","Data":"ca79e1b967fda9538ac8c7616c1519d92efdda231cbec9886928b3da05028f9a"} Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.295130 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerID="951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea" exitCode=0 Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.295181 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerDied","Data":"951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea"} Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 
Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.359905 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.359860587 podStartE2EDuration="2.359860587s" podCreationTimestamp="2025-12-05 06:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:49:13.354325533 +0000 UTC m=+187.423841865" watchObservedRunningTime="2025-12-05 06:49:13.359860587 +0000 UTC m=+187.429376919"
Dec 05 06:49:13 crc kubenswrapper[4780]: I1205 06:49:13.457419 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 06:49:14 crc kubenswrapper[4780]: I1205 06:49:14.306561 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerStarted","Data":"a178fa17eec9e422b8fcf3daa72eff9cfdf155d87c5303682967f9645adebc1c"}
Dec 05 06:49:14 crc kubenswrapper[4780]: I1205 06:49:14.313169 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerStarted","Data":"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98"}
Dec 05 06:49:14 crc kubenswrapper[4780]: I1205 06:49:14.344455 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qgdk4" podStartSLOduration=2.024247173 podStartE2EDuration="44.344438739s" podCreationTimestamp="2025-12-05 06:48:30 +0000 UTC" firstStartedPulling="2025-12-05 06:48:31.785205528 +0000 UTC m=+145.854721860" lastFinishedPulling="2025-12-05 06:49:14.105397084 +0000 UTC m=+188.174913426" observedRunningTime="2025-12-05 06:49:14.342685781 +0000 UTC m=+188.412202113" watchObservedRunningTime="2025-12-05 06:49:14.344438739 +0000 UTC m=+188.413955071"
Dec 05 06:49:15 crc kubenswrapper[4780]: I1205 06:49:15.320448 4780 generic.go:334] "Generic (PLEG): container finished" podID="dd0145ad-560f-4192-b244-63c7c4b38748" containerID="a178fa17eec9e422b8fcf3daa72eff9cfdf155d87c5303682967f9645adebc1c" exitCode=0
Dec 05 06:49:15 crc kubenswrapper[4780]: I1205 06:49:15.320538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerDied","Data":"a178fa17eec9e422b8fcf3daa72eff9cfdf155d87c5303682967f9645adebc1c"}
Dec 05 06:49:15 crc kubenswrapper[4780]: I1205 06:49:15.322741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerStarted","Data":"1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece"}
Dec 05 06:49:15 crc kubenswrapper[4780]: I1205 06:49:15.359241 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tc7qm" podStartSLOduration=2.878866402 podStartE2EDuration="47.359219692s" podCreationTimestamp="2025-12-05 06:48:28 +0000 UTC" firstStartedPulling="2025-12-05 06:48:29.733863609 +0000 UTC m=+143.803379941" lastFinishedPulling="2025-12-05 06:49:14.214216899 +0000 UTC m=+188.283733231" observedRunningTime="2025-12-05 06:49:15.356557308 +0000 UTC m=+189.426073630" watchObservedRunningTime="2025-12-05 06:49:15.359219692 +0000 UTC m=+189.428736024"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.545403 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rwbtd"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.546281 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rwbtd"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.815306 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rwbtd"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.949915 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tc7qm"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.949969 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tc7qm"
Dec 05 06:49:18 crc kubenswrapper[4780]: I1205 06:49:18.983593 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tc7qm"
Dec 05 06:49:19 crc kubenswrapper[4780]: I1205 06:49:19.387358 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rwbtd"
Dec 05 06:49:19 crc kubenswrapper[4780]: I1205 06:49:19.456316 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tc7qm"
Dec 05 06:49:20 crc kubenswrapper[4780]: I1205 06:49:20.343321 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qgdk4"
Dec 05 06:49:20 crc kubenswrapper[4780]: I1205 06:49:20.343371 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qgdk4"
Dec 05 06:49:20 crc kubenswrapper[4780]: I1205 06:49:20.391771 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qgdk4"
Dec 05 06:49:20 crc kubenswrapper[4780]: I1205 06:49:20.431154 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qgdk4"
Dec 05 06:49:21 crc kubenswrapper[4780]: I1205 06:49:21.378936 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"]
Dec 05 06:49:21 crc kubenswrapper[4780]: I1205 06:49:21.379424 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tc7qm" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="registry-server" containerID="cri-o://1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" gracePeriod=2
Dec 05 06:49:23 crc kubenswrapper[4780]: I1205 06:49:23.575215 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwbtd"]
pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:49:23 crc kubenswrapper[4780]: I1205 06:49:23.575950 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rwbtd" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="registry-server" containerID="cri-o://902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" gracePeriod=2 Dec 05 06:49:25 crc kubenswrapper[4780]: I1205 06:49:25.386714 4780 generic.go:334] "Generic (PLEG): container finished" podID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerID="902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" exitCode=0 Dec 05 06:49:25 crc kubenswrapper[4780]: I1205 06:49:25.386793 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerDied","Data":"902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d"} Dec 05 06:49:25 crc kubenswrapper[4780]: I1205 06:49:25.388292 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tc7qm_7f17411e-4523-4882-ae69-60b50d8dfd43/registry-server/0.log" Dec 05 06:49:25 crc kubenswrapper[4780]: I1205 06:49:25.388956 4780 generic.go:334] "Generic (PLEG): container finished" podID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerID="1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" exitCode=137 Dec 05 06:49:25 crc kubenswrapper[4780]: I1205 06:49:25.389001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerDied","Data":"1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece"} Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.546098 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d is running failed: container process not found" containerID="902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.546777 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d is running failed: container process not found" containerID="902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.547091 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d is running failed: container process not found" containerID="902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.547130 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rwbtd" 
podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="registry-server" Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.950386 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece is running failed: container process not found" containerID="1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.950691 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece is running failed: container process not found" containerID="1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.950922 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece is running failed: container process not found" containerID="1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 06:49:28 crc kubenswrapper[4780]: E1205 06:49:28.951021 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-tc7qm" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="registry-server" Dec 05 06:49:29 crc kubenswrapper[4780]: I1205 06:49:29.908259 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:49:29 crc kubenswrapper[4780]: I1205 06:49:29.908629 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:49:29 crc kubenswrapper[4780]: I1205 06:49:29.908682 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:49:29 crc kubenswrapper[4780]: I1205 06:49:29.909687 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 06:49:29 crc kubenswrapper[4780]: I1205 06:49:29.909942 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" 
containerName="machine-config-daemon" containerID="cri-o://64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912" gracePeriod=600 Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.080274 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tc7qm_7f17411e-4523-4882-ae69-60b50d8dfd43/registry-server/0.log" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.081247 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.086732 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.136356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content\") pod \"70d64414-49e8-4453-9d43-c0a53ac678ad\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.184722 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70d64414-49e8-4453-9d43-c0a53ac678ad" (UID: "70d64414-49e8-4453-9d43-c0a53ac678ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.236923 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities\") pod \"70d64414-49e8-4453-9d43-c0a53ac678ad\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.236977 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content\") pod \"7f17411e-4523-4882-ae69-60b50d8dfd43\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.236995 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities\") pod \"7f17411e-4523-4882-ae69-60b50d8dfd43\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.237324 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8fhc\" (UniqueName: \"kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc\") pod \"70d64414-49e8-4453-9d43-c0a53ac678ad\" (UID: \"70d64414-49e8-4453-9d43-c0a53ac678ad\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.237368 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l62vp\" (UniqueName: \"kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp\") pod \"7f17411e-4523-4882-ae69-60b50d8dfd43\" (UID: \"7f17411e-4523-4882-ae69-60b50d8dfd43\") " Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.237599 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities" (OuterVolumeSpecName: "utilities") pod "7f17411e-4523-4882-ae69-60b50d8dfd43" (UID: "7f17411e-4523-4882-ae69-60b50d8dfd43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.237625 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.238279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities" (OuterVolumeSpecName: "utilities") pod "70d64414-49e8-4453-9d43-c0a53ac678ad" (UID: "70d64414-49e8-4453-9d43-c0a53ac678ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.248664 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc" (OuterVolumeSpecName: "kube-api-access-h8fhc") pod "70d64414-49e8-4453-9d43-c0a53ac678ad" (UID: "70d64414-49e8-4453-9d43-c0a53ac678ad"). InnerVolumeSpecName "kube-api-access-h8fhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.248839 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp" (OuterVolumeSpecName: "kube-api-access-l62vp") pod "7f17411e-4523-4882-ae69-60b50d8dfd43" (UID: "7f17411e-4523-4882-ae69-60b50d8dfd43"). InnerVolumeSpecName "kube-api-access-l62vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.283817 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f17411e-4523-4882-ae69-60b50d8dfd43" (UID: "7f17411e-4523-4882-ae69-60b50d8dfd43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.338516 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70d64414-49e8-4453-9d43-c0a53ac678ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.338550 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.338568 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f17411e-4523-4882-ae69-60b50d8dfd43-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.338580 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8fhc\" (UniqueName: \"kubernetes.io/projected/70d64414-49e8-4453-9d43-c0a53ac678ad-kube-api-access-h8fhc\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.338595 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l62vp\" (UniqueName: \"kubernetes.io/projected/7f17411e-4523-4882-ae69-60b50d8dfd43-kube-api-access-l62vp\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.418763 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912" exitCode=0 Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.418900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912"} Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.421666 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tc7qm_7f17411e-4523-4882-ae69-60b50d8dfd43/registry-server/0.log" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.422459 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tc7qm" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.422424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tc7qm" event={"ID":"7f17411e-4523-4882-ae69-60b50d8dfd43","Type":"ContainerDied","Data":"be5252ba858e7707a9fb55ce2f9fd920687569bb57bc48f4eb2458ecd1d210c6"} Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.422605 4780 scope.go:117] "RemoveContainer" containerID="1e3c552644d43ec5fae75af77cab1f09b001df78abc923b746cb83be0c0daece" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.424666 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwbtd" event={"ID":"70d64414-49e8-4453-9d43-c0a53ac678ad","Type":"ContainerDied","Data":"35cb5285248e5443e7040ec06983fb0418b409925407ff7bcaa1ae2242d6b081"} Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.424808 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rwbtd" Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.457616 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.460578 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rwbtd"] Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.477121 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"] Dec 05 06:49:31 crc kubenswrapper[4780]: I1205 06:49:31.477170 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tc7qm"] Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.015491 4780 scope.go:117] "RemoveContainer" containerID="ca79e1b967fda9538ac8c7616c1519d92efdda231cbec9886928b3da05028f9a" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.146671 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" path="/var/lib/kubelet/pods/70d64414-49e8-4453-9d43-c0a53ac678ad/volumes" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.167723 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" path="/var/lib/kubelet/pods/7f17411e-4523-4882-ae69-60b50d8dfd43/volumes" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.241345 4780 scope.go:117] "RemoveContainer" containerID="fe6e22155478a18f6e0f5a7a2b1ca2e33e05c0d71c996910fdcb76e7d54b2166" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.848519 4780 scope.go:117] "RemoveContainer" containerID="902e635a3aa68308b24f4b860200c3c81ed79ce88af55f23987d4ef01159cc1d" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.879264 4780 scope.go:117] "RemoveContainer" containerID="e810214e807aecdf78a33f26a7d4eb0c8df8bbdc8cdf29138367e1aa9580a678" Dec 05 06:49:32 crc kubenswrapper[4780]: I1205 06:49:32.972213 4780 scope.go:117] "RemoveContainer" containerID="cea7ac5c041eae31fa9eb106bd87f9861e3c10089d9e9280551c99ec9985b562" Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.440685 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerStarted","Data":"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.444090 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerStarted","Data":"d4b799c36ff85449b1a3f5eec29cf6402dc7b473b2656a071b55172c29c48cd7"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.446871 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerID="45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4" exitCode=0 Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.446930 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerDied","Data":"45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.450754 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.452376 4780 generic.go:334] "Generic (PLEG): container finished" podID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerID="9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b" exitCode=0 Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.452446 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerDied","Data":"9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.456635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerStarted","Data":"7b4ecc4fb9e4ef9b9c22f202fad61b8255960d7d30bf4c9c58c419204daa834a"} Dec 05 06:49:33 crc kubenswrapper[4780]: I1205 06:49:33.558920 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b4zdl" podStartSLOduration=2.533203821 podStartE2EDuration="1m3.558902664s" podCreationTimestamp="2025-12-05 06:48:30 +0000 UTC" firstStartedPulling="2025-12-05 06:48:31.802113478 +0000 UTC m=+145.871629811" lastFinishedPulling="2025-12-05 06:49:32.827812302 +0000 UTC m=+206.897328654" observedRunningTime="2025-12-05 06:49:33.557396809 +0000 UTC m=+207.626913141" watchObservedRunningTime="2025-12-05 06:49:33.558902664 +0000 UTC m=+207.628418996" Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.464935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerStarted","Data":"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa"} Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.467528 4780 generic.go:334] "Generic (PLEG): container finished" podID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerID="a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4" exitCode=0 Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.467703 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerDied","Data":"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4"} Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.473366 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b591ab3-7725-4984-aa63-8057aadb595e" containerID="d4b799c36ff85449b1a3f5eec29cf6402dc7b473b2656a071b55172c29c48cd7" exitCode=0 Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.477027 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerDied","Data":"d4b799c36ff85449b1a3f5eec29cf6402dc7b473b2656a071b55172c29c48cd7"} Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.504037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerStarted","Data":"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d"} Dec 05 
Dec 05 06:49:34 crc kubenswrapper[4780]: I1205 06:49:34.555842 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhgg6" podStartSLOduration=2.45213368 podStartE2EDuration="1m6.555823114s" podCreationTimestamp="2025-12-05 06:48:28 +0000 UTC" firstStartedPulling="2025-12-05 06:48:29.718974465 +0000 UTC m=+143.788490807" lastFinishedPulling="2025-12-05 06:49:33.822663909 +0000 UTC m=+207.892180241" observedRunningTime="2025-12-05 06:49:34.552106702 +0000 UTC m=+208.621623044" watchObservedRunningTime="2025-12-05 06:49:34.555823114 +0000 UTC m=+208.625339456"
Dec 05 06:49:35 crc kubenswrapper[4780]: I1205 06:49:35.510696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerStarted","Data":"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533"}
Dec 05 06:49:35 crc kubenswrapper[4780]: I1205 06:49:35.512652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerStarted","Data":"b14ae3b6eb511b3913d9ed65ce410c811e47201a462b688f1bc349eb49e81736"}
Dec 05 06:49:35 crc kubenswrapper[4780]: I1205 06:49:35.529450 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5b2tw" podStartSLOduration=3.565140151 podStartE2EDuration="1m4.529433335s" podCreationTimestamp="2025-12-05 06:48:31 +0000 UTC" firstStartedPulling="2025-12-05 06:48:33.875641355 +0000 UTC m=+147.945157687" lastFinishedPulling="2025-12-05 06:49:34.839934539 +0000 UTC m=+208.909450871" observedRunningTime="2025-12-05 06:49:35.528150847 +0000 UTC m=+209.597667179" watchObservedRunningTime="2025-12-05 06:49:35.529433335 +0000 UTC m=+209.598949667"
Dec 05 06:49:35 crc kubenswrapper[4780]: I1205 06:49:35.544599 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvd4c" podStartSLOduration=3.54608654 podStartE2EDuration="1m4.544579439s" podCreationTimestamp="2025-12-05 06:48:31 +0000 UTC" firstStartedPulling="2025-12-05 06:48:33.884330456 +0000 UTC m=+147.953846788" lastFinishedPulling="2025-12-05 06:49:34.882823355 +0000 UTC m=+208.952339687" observedRunningTime="2025-12-05 06:49:35.54360429 +0000 UTC m=+209.613120622" watchObservedRunningTime="2025-12-05 06:49:35.544579439 +0000 UTC m=+209.614095771"
Dec 05 06:49:37 crc kubenswrapper[4780]: I1205 06:49:37.778823 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmn88"]
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.132078 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljcn2"
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.132124 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljcn2"
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.175422 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljcn2"
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.740376 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhgg6"
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.741835 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhgg6"
Dec 05 06:49:38 crc kubenswrapper[4780]: I1205 06:49:38.795199 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhgg6"
Dec 05 06:49:39 crc kubenswrapper[4780]: I1205 06:49:39.623607 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhgg6"
Dec 05 06:49:40 crc kubenswrapper[4780]: I1205 06:49:40.754085 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b4zdl"
Dec 05 06:49:40 crc kubenswrapper[4780]: I1205 06:49:40.754131 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b4zdl"
Dec 05 06:49:40 crc kubenswrapper[4780]: I1205 06:49:40.791953 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b4zdl"
Dec 05 06:49:41 crc kubenswrapper[4780]: I1205 06:49:41.564818 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5b2tw"
Dec 05 06:49:41 crc kubenswrapper[4780]: I1205 06:49:41.565158 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5b2tw"
Dec 05 06:49:41 crc kubenswrapper[4780]: I1205 06:49:41.600700 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5b2tw"
Dec 05 06:49:41 crc kubenswrapper[4780]: I1205 06:49:41.632826 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5b2tw"
Dec 05 06:49:41 crc kubenswrapper[4780]: I1205 06:49:41.633539 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b4zdl"
Dec 05 06:49:42 crc kubenswrapper[4780]: I1205 06:49:42.021970 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvd4c"
Dec 05 06:49:42 crc kubenswrapper[4780]: I1205 06:49:42.022022 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvd4c"
Dec 05 06:49:42 crc kubenswrapper[4780]: I1205 06:49:42.058495 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvd4c"
Dec 05 06:49:42 crc kubenswrapper[4780]: I1205 06:49:42.630950 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvd4c"
Dec 05 06:49:44 crc kubenswrapper[4780]: I1205 06:49:44.309306 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"]
Dec 05 06:49:44 crc kubenswrapper[4780]: I1205 06:49:44.309703 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b4zdl" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="registry-server" containerID="cri-o://7b4ecc4fb9e4ef9b9c22f202fad61b8255960d7d30bf4c9c58c419204daa834a" gracePeriod=2
Dec 05 06:49:44 crc kubenswrapper[4780]: I1205 06:49:44.906037 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvd4c"]
Dec 05 06:49:44 crc kubenswrapper[4780]: I1205 06:49:44.906562 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvd4c" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="registry-server" containerID="cri-o://b14ae3b6eb511b3913d9ed65ce410c811e47201a462b688f1bc349eb49e81736" gracePeriod=2
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.177784 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljcn2"
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.633179 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvd4c_9b591ab3-7725-4984-aa63-8057aadb595e/registry-server/0.log"
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.634046 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerDied","Data":"b14ae3b6eb511b3913d9ed65ce410c811e47201a462b688f1bc349eb49e81736"}
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.634121 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b591ab3-7725-4984-aa63-8057aadb595e" containerID="b14ae3b6eb511b3913d9ed65ce410c811e47201a462b688f1bc349eb49e81736" exitCode=137
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.637456 4780 generic.go:334] "Generic (PLEG): container finished" podID="dd0145ad-560f-4192-b244-63c7c4b38748" containerID="7b4ecc4fb9e4ef9b9c22f202fad61b8255960d7d30bf4c9c58c419204daa834a" exitCode=0
Dec 05 06:49:48 crc kubenswrapper[4780]: I1205 06:49:48.637572 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerDied","Data":"7b4ecc4fb9e4ef9b9c22f202fad61b8255960d7d30bf4c9c58c419204daa834a"}
Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.237122 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4zdl"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.394217 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vzm\" (UniqueName: \"kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm\") pod \"dd0145ad-560f-4192-b244-63c7c4b38748\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.394379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content\") pod \"dd0145ad-560f-4192-b244-63c7c4b38748\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.394453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities\") pod \"dd0145ad-560f-4192-b244-63c7c4b38748\" (UID: \"dd0145ad-560f-4192-b244-63c7c4b38748\") " Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.395411 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities" (OuterVolumeSpecName: "utilities") pod "dd0145ad-560f-4192-b244-63c7c4b38748" (UID: "dd0145ad-560f-4192-b244-63c7c4b38748"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.407166 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm" (OuterVolumeSpecName: "kube-api-access-t4vzm") pod "dd0145ad-560f-4192-b244-63c7c4b38748" (UID: "dd0145ad-560f-4192-b244-63c7c4b38748"). InnerVolumeSpecName "kube-api-access-t4vzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.411244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd0145ad-560f-4192-b244-63c7c4b38748" (UID: "dd0145ad-560f-4192-b244-63c7c4b38748"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.495648 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vzm\" (UniqueName: \"kubernetes.io/projected/dd0145ad-560f-4192-b244-63c7c4b38748-kube-api-access-t4vzm\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.495690 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.495703 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd0145ad-560f-4192-b244-63c7c4b38748-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.646801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4zdl" event={"ID":"dd0145ad-560f-4192-b244-63c7c4b38748","Type":"ContainerDied","Data":"797573d292d604bd82aed639a66e622c1906145dc8b3a33f806cc32bd3f0543d"} Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.646858 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4zdl" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.646868 4780 scope.go:117] "RemoveContainer" containerID="7b4ecc4fb9e4ef9b9c22f202fad61b8255960d7d30bf4c9c58c419204daa834a" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.668270 4780 scope.go:117] "RemoveContainer" containerID="a178fa17eec9e422b8fcf3daa72eff9cfdf155d87c5303682967f9645adebc1c" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.675610 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"] Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.679952 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4zdl"] Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.702871 4780 scope.go:117] "RemoveContainer" containerID="72d9bd33842b9eb005a25316ff9d14faa4e730680c3296cebaad8a0e9d5c4513" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.851995 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvd4c_9b591ab3-7725-4984-aa63-8057aadb595e/registry-server/0.log" Dec 05 06:49:49 crc kubenswrapper[4780]: I1205 06:49:49.853243 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.002753 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content\") pod \"9b591ab3-7725-4984-aa63-8057aadb595e\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.002797 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities\") pod \"9b591ab3-7725-4984-aa63-8057aadb595e\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.002975 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9n2n\" (UniqueName: \"kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n\") pod \"9b591ab3-7725-4984-aa63-8057aadb595e\" (UID: \"9b591ab3-7725-4984-aa63-8057aadb595e\") " Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.003661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities" (OuterVolumeSpecName: "utilities") pod "9b591ab3-7725-4984-aa63-8057aadb595e" (UID: "9b591ab3-7725-4984-aa63-8057aadb595e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.006299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n" (OuterVolumeSpecName: "kube-api-access-t9n2n") pod "9b591ab3-7725-4984-aa63-8057aadb595e" (UID: "9b591ab3-7725-4984-aa63-8057aadb595e"). InnerVolumeSpecName "kube-api-access-t9n2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.103710 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b591ab3-7725-4984-aa63-8057aadb595e" (UID: "9b591ab3-7725-4984-aa63-8057aadb595e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.104346 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9n2n\" (UniqueName: \"kubernetes.io/projected/9b591ab3-7725-4984-aa63-8057aadb595e-kube-api-access-t9n2n\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.104376 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.104388 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b591ab3-7725-4984-aa63-8057aadb595e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.146199 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" path="/var/lib/kubelet/pods/dd0145ad-560f-4192-b244-63c7c4b38748/volumes" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196326 4780 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196594 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="extract-content" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196612 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="extract-content" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196621 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="extract-utilities" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196629 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="extract-utilities" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196651 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="registry-server" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196658 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="registry-server" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196669 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="registry-server" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196675 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="registry-server" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196691 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="extract-utilities" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196698 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="extract-utilities" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196705 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="extract-content" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196722 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196728 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196737 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="extract-content"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196744 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="extract-content"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196751 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="extract-utilities"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196757 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="extract-utilities"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196768 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="extract-content"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196775 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="extract-content"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196784 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="extract-utilities"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196791 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="extract-utilities"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.196802 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196810 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196923 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f17411e-4523-4882-ae69-60b50d8dfd43" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196936 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d64414-49e8-4453-9d43-c0a53ac678ad" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196949 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0145ad-560f-4192-b244-63c7c4b38748" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.196959 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" containerName="registry-server"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197323 4780 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197461 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197608 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b" gracePeriod=15
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197641 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc" gracePeriod=15
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197656 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7" gracePeriod=15
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197672 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6" gracePeriod=15
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.197611 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864" gracePeriod=15
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198508 4780 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198661 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198673 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198680 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198688 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198698 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198706 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198720 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198727 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198742 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198750 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198761 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198781 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 06:49:50 crc kubenswrapper[4780]: E1205 06:49:50.198792 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198801 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198971 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198986 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.198996 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.199006 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.199018 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.199223 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308371 4780 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308717 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection 
refused" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308603 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308872 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308939 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308962 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.308985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.309003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.309082 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.309137 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.409978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410046 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410065 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410138 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410159 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410167 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410167 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410217 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410192 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410261 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.410321 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.654692 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvd4c_9b591ab3-7725-4984-aa63-8057aadb595e/registry-server/0.log" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.655448 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvd4c" event={"ID":"9b591ab3-7725-4984-aa63-8057aadb595e","Type":"ContainerDied","Data":"0cf1e9c105231fa30e8d385a35a5718025939bb3700c44c6390779467411e4c6"} Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.655507 4780 scope.go:117] "RemoveContainer" containerID="b14ae3b6eb511b3913d9ed65ce410c811e47201a462b688f1bc349eb49e81736" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.655542 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvd4c" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.656294 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.656522 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.659542 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.659791 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.669021 4780 scope.go:117] "RemoveContainer" containerID="d4b799c36ff85449b1a3f5eec29cf6402dc7b473b2656a071b55172c29c48cd7" Dec 05 06:49:50 crc kubenswrapper[4780]: I1205 06:49:50.682812 4780 scope.go:117] "RemoveContainer" containerID="ce5e9ec50468ba4e638003d20fa244e11de66451de82ed55501773129383696c" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.666503 4780 generic.go:334] "Generic (PLEG): container finished" podID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" containerID="d01d2fd2a5a7965a1626c65b8b4e9f0589aa62940731275edc4a4abd4154bd97" exitCode=0 Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.666587 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73c1f59b-ba4a-4d44-9be6-6166087ab9e3","Type":"ContainerDied","Data":"d01d2fd2a5a7965a1626c65b8b4e9f0589aa62940731275edc4a4abd4154bd97"} Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.668411 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.668585 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.670011 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.671463 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672129 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864" exitCode=0 Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672162 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc" exitCode=0 Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672175 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7" exitCode=0 Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672215 4780 scope.go:117] "RemoveContainer" containerID="631b8d36ce34acbfd3918e132d20399dd3ad68c0a67b0b5a1c2dce42b04100ce" Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672186 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6" exitCode=2 Dec 05 06:49:52 crc kubenswrapper[4780]: I1205 06:49:52.672253 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b" exitCode=0 Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.073423 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.074431 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.075104 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.075660 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.075897 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.248482 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.248590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249165 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249288 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249746 4780 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249778 4780 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.249792 4780 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.684783 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.685852 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.686003 4780 scope.go:117] "RemoveContainer" containerID="a0f781d7fd77235dd233e1af5d8782b91a02a352a068687e0a99c747925b5864" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.702724 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.703031 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.703306 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.711287 4780 scope.go:117] "RemoveContainer" containerID="06a1924d24e77cc9adfd515e5467b762d1b2f4eb2d806645bba3a88589e2a9dc" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.727142 4780 scope.go:117] "RemoveContainer" containerID="177fb6ea8138b84c0bfe18f11569cdd0656f30a1c015a82d8d9c2c8351aa61e7" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.745280 4780 scope.go:117] "RemoveContainer" containerID="dae4dcca43e87a63b1c96495df34e6873a149af2521e5fedfae2c599dbc22bb6" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.762862 4780 scope.go:117] "RemoveContainer" containerID="081846cae4cff97d21d03da5cab4c6b480a75cf4903cff41025819f5c35e861b" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.776268 4780 scope.go:117] "RemoveContainer" containerID="a66d2b536d523f554b871f17a6160b0fcec08cdff0f4ab3119492868b2f05a5a" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.950759 
4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.951274 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.951492 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:53 crc kubenswrapper[4780]: I1205 06:49:53.951648 4780 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.059984 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access\") pod \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.060171 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock\") pod \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.060390 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock" (OuterVolumeSpecName: "var-lock") pod "73c1f59b-ba4a-4d44-9be6-6166087ab9e3" (UID: "73c1f59b-ba4a-4d44-9be6-6166087ab9e3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.060953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir\") pod \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\" (UID: \"73c1f59b-ba4a-4d44-9be6-6166087ab9e3\") " Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.060993 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73c1f59b-ba4a-4d44-9be6-6166087ab9e3" (UID: "73c1f59b-ba4a-4d44-9be6-6166087ab9e3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.061468 4780 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.061492 4780 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.066245 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73c1f59b-ba4a-4d44-9be6-6166087ab9e3" (UID: "73c1f59b-ba4a-4d44-9be6-6166087ab9e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.144075 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.162647 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c1f59b-ba4a-4d44-9be6-6166087ab9e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.691271 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.691269 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73c1f59b-ba4a-4d44-9be6-6166087ab9e3","Type":"ContainerDied","Data":"04afc27d87375445c7d22d2e9c33b3349adb78cf4c03e3ea9e8ea4fd15e95964"} Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.692470 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04afc27d87375445c7d22d2e9c33b3349adb78cf4c03e3ea9e8ea4fd15e95964" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.696263 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:54 crc kubenswrapper[4780]: I1205 06:49:54.696814 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:55 crc kubenswrapper[4780]: E1205 06:49:55.228862 4780 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:55 crc kubenswrapper[4780]: I1205 06:49:55.229637 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:55 crc kubenswrapper[4780]: W1205 06:49:55.247268 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f227e18c03c234101ae60fd6545e9534b7dcf53699d65410deb1a53172603e3f WatchSource:0}: Error finding container f227e18c03c234101ae60fd6545e9534b7dcf53699d65410deb1a53172603e3f: Status 404 returned error can't find the container with id f227e18c03c234101ae60fd6545e9534b7dcf53699d65410deb1a53172603e3f Dec 05 06:49:55 crc kubenswrapper[4780]: E1205 06:49:55.250072 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e3f0521c06feb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 06:49:55.249459179 +0000 UTC m=+229.318975511,LastTimestamp:2025-12-05 06:49:55.249459179 +0000 UTC m=+229.318975511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 06:49:55 crc kubenswrapper[4780]: I1205 06:49:55.698126 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f227e18c03c234101ae60fd6545e9534b7dcf53699d65410deb1a53172603e3f"} Dec 05 06:49:56 crc kubenswrapper[4780]: I1205 06:49:56.141109 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:56 crc kubenswrapper[4780]: I1205 06:49:56.142274 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:56 crc kubenswrapper[4780]: I1205 06:49:56.704923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e"} Dec 05 06:49:56 crc kubenswrapper[4780]: E1205 06:49:56.705547 4780 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:56 crc kubenswrapper[4780]: I1205 06:49:56.705623 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:56 crc kubenswrapper[4780]: I1205 06:49:56.705849 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:57 crc kubenswrapper[4780]: E1205 06:49:57.100868 4780 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e3f0521c06feb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 06:49:55.249459179 +0000 UTC m=+229.318975511,LastTimestamp:2025-12-05 06:49:55.249459179 +0000 UTC m=+229.318975511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 06:49:57 crc kubenswrapper[4780]: E1205 06:49:57.712165 4780 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.193033 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.193690 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.194062 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.194420 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: 
connection refused" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.194770 4780 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:49:59 crc kubenswrapper[4780]: I1205 06:49:59.194817 4780 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.195154 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.396513 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Dec 05 06:49:59 crc kubenswrapper[4780]: E1205 06:49:59.798058 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Dec 05 06:50:00 crc kubenswrapper[4780]: E1205 06:50:00.599281 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.138336 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.139627 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.140092 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.155925 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.155971 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:02 crc kubenswrapper[4780]: E1205 06:50:02.156464 4780 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.157064 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:02 crc kubenswrapper[4780]: E1205 06:50:02.201134 4780 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.743941 4780 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="324e654d99ba65156c5621112e1650af9c39dbd4424cbecc8d177f3c5474e08f" exitCode=0 Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.744078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"324e654d99ba65156c5621112e1650af9c39dbd4424cbecc8d177f3c5474e08f"} Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.744467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e90c5e0f18856c087e9667c5734d00885e8f726dc8ce937b2890738e4f71b9d"} Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.745007 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.745051 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:02 crc kubenswrapper[4780]: E1205 06:50:02.745543 4780 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.745804 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.746338 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:02 crc kubenswrapper[4780]: I1205 06:50:02.804633 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerName="oauth-openshift" containerID="cri-o://e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5" gracePeriod=15 Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.148559 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.149409 4780 status_manager.go:851] "Failed to get status for pod" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-fmn88\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.150087 4780 status_manager.go:851] "Failed to get status for pod" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" pod="openshift-marketplace/redhat-operators-vvd4c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vvd4c\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.150446 4780 status_manager.go:851] "Failed to get status for pod" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.300634 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301231 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 
crc kubenswrapper[4780]: I1205 06:50:03.301276 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301409 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301459 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301513 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr26h\" (UniqueName: \"kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301549 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301583 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301617 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301652 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.301701 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert\") pod \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\" (UID: \"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5\") " Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.302009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.302305 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.303135 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.303254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.303329 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.309451 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.309829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.310220 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.310970 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.311002 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.311501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.312242 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.312790 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h" (OuterVolumeSpecName: "kube-api-access-mr26h") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "kube-api-access-mr26h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.326026 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" (UID: "e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402815 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402868 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402905 4780 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402920 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402937 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402952 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402965 4780 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402977 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.402989 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.403012 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr26h\" (UniqueName: \"kubernetes.io/projected/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-kube-api-access-mr26h\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.403025 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.403038 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.403053 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.403064 4780 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.762220 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"65bef09952ab021035e6ffb88c851c5fc481863fb2bdd6b55032d062f9f66798"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.762279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2531ac96b4e907f51f968db3934e698d9a5c78c22aa5ce0fa906d725bb444436"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.762296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ef5a35dab78d07dd55f25f4a63911ee3c11c228683723fc6e97ab54e982958c"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.762309 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2bab049661f0a698d1698bc0604a6d463f8af22227c563d72c5557063dc6c932"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.763972 4780 generic.go:334] "Generic (PLEG): container finished" podID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerID="e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5" exitCode=0 Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.764012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" event={"ID":"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5","Type":"ContainerDied","Data":"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.764029 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" event={"ID":"e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5","Type":"ContainerDied","Data":"f3b59590c806ab27b005de47b2d3d18f56624fba3ad710a49623190bd522e5cf"} Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.764047 4780 scope.go:117] "RemoveContainer" containerID="e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.764177 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmn88" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.791645 4780 scope.go:117] "RemoveContainer" containerID="e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5" Dec 05 06:50:03 crc kubenswrapper[4780]: E1205 06:50:03.792154 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5\": container with ID starting with e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5 not found: ID does not exist" containerID="e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5" Dec 05 06:50:03 crc kubenswrapper[4780]: I1205 06:50:03.792198 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5"} err="failed to get container status \"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5\": rpc error: code = NotFound desc = could not find container \"e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5\": container with ID starting with e7db1acda6506ecafad49fabafc3df5bb2d9b70d4b487c0e718d707246ad8fc5 not found: ID does not exist" Dec 05 06:50:04 crc kubenswrapper[4780]: I1205 06:50:04.772427 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b590e48fbef050604af09928f9bd213ead46a36ad4ec48ebb055d7bb4049f068"} Dec 05 06:50:04 crc kubenswrapper[4780]: I1205 06:50:04.772747 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:04 crc kubenswrapper[4780]: I1205 06:50:04.772679 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:04 crc kubenswrapper[4780]: I1205 06:50:04.772779 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:05 crc kubenswrapper[4780]: I1205 06:50:05.781488 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 06:50:05 crc kubenswrapper[4780]: I1205 06:50:05.781539 4780 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399" exitCode=1 Dec 05 06:50:05 crc kubenswrapper[4780]: I1205 06:50:05.781580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399"} Dec 05 06:50:05 crc kubenswrapper[4780]: I1205 06:50:05.782122 4780 scope.go:117] "RemoveContainer" containerID="867fa16e1832f7c9d5d0b6e8d3461a95aa0cdc73611c8cfc69f5a90e82f4a399" Dec 05 06:50:06 crc kubenswrapper[4780]: I1205 06:50:06.791147 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 06:50:06 crc kubenswrapper[4780]: I1205 06:50:06.791574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"607cacd485fd37b9c3ef70d2403bece572677d09ce4642cc0545ad3e74943170"} Dec 05 06:50:06 crc kubenswrapper[4780]: I1205 06:50:06.947956 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:50:07 crc kubenswrapper[4780]: I1205 06:50:07.157339 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:07 crc kubenswrapper[4780]: I1205 06:50:07.157399 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:07 crc kubenswrapper[4780]: I1205 06:50:07.162433 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:09 crc kubenswrapper[4780]: I1205 06:50:09.781974 4780 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:09 crc kubenswrapper[4780]: I1205 06:50:09.806388 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:09 crc kubenswrapper[4780]: I1205 06:50:09.806427 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:09 crc kubenswrapper[4780]: I1205 06:50:09.810050 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:09 crc kubenswrapper[4780]: I1205 06:50:09.811662 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="86a4d6d9-b74a-4b58-ae75-81b5639dc5eb" Dec 05 06:50:10 crc kubenswrapper[4780]: I1205 06:50:10.812708 4780 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:10 crc kubenswrapper[4780]: I1205 06:50:10.812759 4780 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52f67f7f-0fdd-4d00-aebe-1b29ae739ff1" Dec 05 06:50:12 crc kubenswrapper[4780]: I1205 06:50:12.089931 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:50:12 crc kubenswrapper[4780]: I1205 06:50:12.093702 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:50:16 crc kubenswrapper[4780]: I1205 06:50:16.155202 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="86a4d6d9-b74a-4b58-ae75-81b5639dc5eb" Dec 05 06:50:16 crc kubenswrapper[4780]: I1205 06:50:16.953471 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 06:50:19 crc kubenswrapper[4780]: I1205 06:50:19.584242 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 06:50:20 crc kubenswrapper[4780]: I1205 06:50:20.729854 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 06:50:20 crc kubenswrapper[4780]: I1205 06:50:20.885493 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 06:50:21 crc kubenswrapper[4780]: I1205 06:50:21.041158 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 06:50:21 crc kubenswrapper[4780]: I1205 06:50:21.393395 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 06:50:21 crc kubenswrapper[4780]: I1205 06:50:21.759308 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 06:50:21 crc kubenswrapper[4780]: I1205 06:50:21.892454 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 06:50:21 crc kubenswrapper[4780]: I1205 06:50:21.940409 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.006710 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.072857 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.228237 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.467676 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.618868 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.633973 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 06:50:22 crc kubenswrapper[4780]: I1205 06:50:22.667015 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 06:50:23 crc kubenswrapper[4780]: I1205 06:50:23.000800 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 
06:50:23 crc kubenswrapper[4780]: I1205 06:50:23.176868 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 06:50:23 crc kubenswrapper[4780]: I1205 06:50:23.232802 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 06:50:23 crc kubenswrapper[4780]: I1205 06:50:23.432493 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 06:50:23 crc kubenswrapper[4780]: I1205 06:50:23.907621 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.060013 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.127036 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.425767 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.574187 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.653613 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.722302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.761194 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.800472 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.880637 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 06:50:24 crc kubenswrapper[4780]: I1205 06:50:24.923631 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.126212 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.220191 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.269042 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.351024 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.375247 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 06:50:25 crc 
kubenswrapper[4780]: I1205 06:50:25.414233 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.645541 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.667704 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.757217 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.763527 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 06:50:25 crc kubenswrapper[4780]: I1205 06:50:25.886454 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.052824 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.073834 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.100433 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.116925 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.124019 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.285341 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.415431 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.426743 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.467658 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.507175 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.519389 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.569991 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.594782 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.614911 4780 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.624639 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvd4c","openshift-authentication/oauth-openshift-558db77b4-fmn88","openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.624801 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.629359 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.647804 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.647777826 podStartE2EDuration="17.647777826s" podCreationTimestamp="2025-12-05 06:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:50:26.641519428 +0000 UTC m=+260.711035760" watchObservedRunningTime="2025-12-05 06:50:26.647777826 +0000 UTC m=+260.717294178" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.652160 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.730244 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.770377 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.804156 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.824245 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.831848 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.836362 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.836390 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.865956 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.870481 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 06:50:26 crc kubenswrapper[4780]: I1205 06:50:26.931785 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: 
I1205 06:50:27.047016 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.052050 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.083277 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.198590 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.225019 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.262419 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.288744 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.324489 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.362626 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.379687 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.421088 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.458750 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.480317 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.499616 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.515202 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.518343 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.566537 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.580071 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.581946 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 
06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.595618 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.607182 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.683740 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.719307 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.813785 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.852545 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.866991 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.871264 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.922815 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.945005 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.980562 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 06:50:27 crc kubenswrapper[4780]: I1205 06:50:27.992960 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.024461 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.121841 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.133626 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.145979 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b591ab3-7725-4984-aa63-8057aadb595e" path="/var/lib/kubelet/pods/9b591ab3-7725-4984-aa63-8057aadb595e/volumes" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.146647 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" path="/var/lib/kubelet/pods/e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5/volumes" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.331337 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.575975 4780 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.737336 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.859926 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.885310 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 06:50:28 crc kubenswrapper[4780]: I1205 06:50:28.916260 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.052789 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.091461 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.154134 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.172110 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.192151 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.236517 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.244515 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.249045 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.356314 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.387569 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.582498 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.590135 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.627808 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.635108 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.707222 4780 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.837445 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.867913 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.879083 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.952772 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.969501 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 06:50:29 crc kubenswrapper[4780]: I1205 06:50:29.987023 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.034054 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.077616 4780 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.089698 4780 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.194705 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.194923 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.217116 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.270650 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.454385 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.530091 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.530478 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.593065 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.606678 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.609671 4780 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.744983 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.813184 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.860004 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.931235 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.955357 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 06:50:30 crc kubenswrapper[4780]: I1205 06:50:30.998325 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.109617 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.130293 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.141723 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.183808 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.227670 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.449823 4780 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.464964 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.533963 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.591937 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.663817 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.773512 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 06:50:31 crc kubenswrapper[4780]: I1205 06:50:31.960615 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.006675 4780 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.037977 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.080754 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-b9rjl"] Dec 05 06:50:32 crc kubenswrapper[4780]: E1205 06:50:32.081047 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" containerName="installer" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.081063 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" containerName="installer" Dec 05 06:50:32 crc kubenswrapper[4780]: E1205 06:50:32.081078 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerName="oauth-openshift" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.081086 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerName="oauth-openshift" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.081224 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c1f59b-ba4a-4d44-9be6-6166087ab9e3" containerName="installer" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.081243 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d24dd0-5bdb-4ae7-971f-8cb91aad45f5" containerName="oauth-openshift" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.081719 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.083099 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.085407 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.086046 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.086314 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.086358 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.086533 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087007 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087035 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087036 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087020 4780 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087186 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.087932 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.092575 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.093698 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.098050 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.098079 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.103902 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-b9rjl"] Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.112262 4780 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.112603 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e" gracePeriod=5 Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.127486 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.129213 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.159588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.159701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9b6\" (UniqueName: \"kubernetes.io/projected/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-kube-api-access-8n9b6\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.159739 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.159925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.159982 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160014 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160067 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160096 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-dir\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160157 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160182 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160269 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160325 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.160428 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-policies\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261328 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9b6\" (UniqueName: \"kubernetes.io/projected/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-kube-api-access-8n9b6\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261533 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc 
kubenswrapper[4780]: I1205 06:50:32.261635 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.261867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-dir\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262794 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262891 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.263058 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.263170 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-policies\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.263744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-policies\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-audit-dir\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.262309 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.264286 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.264818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.266147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.266817 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.268479 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-error\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.268778 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.268912 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.269237 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-session\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.277317 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-user-template-login\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.277789 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.281272 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9b6\" (UniqueName: \"kubernetes.io/projected/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2-kube-api-access-8n9b6\") pod \"oauth-openshift-56c495df99-b9rjl\" (UID: \"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\") " pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.326364 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.368715 4780 
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.368715 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.382478 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.401565 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.455508 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.525527 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.538052 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.746142 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 05 06:50:32 crc kubenswrapper[4780]: I1205 06:50:32.946371 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.053584 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.054704 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.113209 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.223418 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.268729 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.300395 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.373499 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.375337 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.396096 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.466825 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.489102 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.555752 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.617268 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.664921 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.683672 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.696045 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.812275 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 05 06:50:33 crc kubenswrapper[4780]: I1205 06:50:33.957183 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.009944 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.116200 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.121819 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.355242 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.371695 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.382059 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.407837 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.492820 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.533834 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.541275 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.625987 4780 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.730011 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.756366 4780 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.782805 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.805524 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.822178 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 05 06:50:34 crc kubenswrapper[4780]: I1205 06:50:34.850155 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.123344 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.125066 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.177126 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.196441 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.280508 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: E1205 06:50:35.312493 4780 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 05 06:50:35 crc kubenswrapper[4780]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-56c495df99-b9rjl_openshift-authentication_c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2_0(369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18): error adding pod openshift-authentication_oauth-openshift-56c495df99-b9rjl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18" Netns:"/var/run/netns/fae5c187-a6dc-49ce-83eb-8e2657a43045" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-56c495df99-b9rjl;K8S_POD_INFRA_CONTAINER_ID=369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18;K8S_POD_UID=c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-56c495df99-b9rjl] networking: Multus: [openshift-authentication/oauth-openshift-56c495df99-b9rjl/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-56c495df99-b9rjl in out of cluster comm: pod "oauth-openshift-56c495df99-b9rjl" not found
Dec 05 06:50:35 crc kubenswrapper[4780]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 06:50:35 crc kubenswrapper[4780]: > Dec 05 06:50:35 crc kubenswrapper[4780]: E1205 06:50:35.312570 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 05 06:50:35 crc kubenswrapper[4780]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-56c495df99-b9rjl_openshift-authentication_c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2_0(369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18): error adding pod openshift-authentication_oauth-openshift-56c495df99-b9rjl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18" Netns:"/var/run/netns/fae5c187-a6dc-49ce-83eb-8e2657a43045" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-56c495df99-b9rjl;K8S_POD_INFRA_CONTAINER_ID=369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18;K8S_POD_UID=c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-56c495df99-b9rjl] networking: Multus: [openshift-authentication/oauth-openshift-56c495df99-b9rjl/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-56c495df99-b9rjl in out of cluster comm: pod "oauth-openshift-56c495df99-b9rjl" not found Dec 05 06:50:35 crc kubenswrapper[4780]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 06:50:35 crc kubenswrapper[4780]: > pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:35 crc kubenswrapper[4780]: E1205 06:50:35.312594 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 05 06:50:35 crc kubenswrapper[4780]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-56c495df99-b9rjl_openshift-authentication_c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2_0(369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18): error adding pod openshift-authentication_oauth-openshift-56c495df99-b9rjl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18" Netns:"/var/run/netns/fae5c187-a6dc-49ce-83eb-8e2657a43045" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-56c495df99-b9rjl;K8S_POD_INFRA_CONTAINER_ID=369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18;K8S_POD_UID=c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2" Path:"" ERRORED: error configuring pod 
[openshift-authentication/oauth-openshift-56c495df99-b9rjl] networking: Multus: [openshift-authentication/oauth-openshift-56c495df99-b9rjl/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-56c495df99-b9rjl in out of cluster comm: pod "oauth-openshift-56c495df99-b9rjl" not found
Dec 05 06:50:35 crc kubenswrapper[4780]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 06:50:35 crc kubenswrapper[4780]: > pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl"
Dec 05 06:50:35 crc kubenswrapper[4780]: E1205 06:50:35.312653 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-56c495df99-b9rjl_openshift-authentication(c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-56c495df99-b9rjl_openshift-authentication(c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-56c495df99-b9rjl_openshift-authentication_c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2_0(369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18): error adding pod openshift-authentication_oauth-openshift-56c495df99-b9rjl to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18\\\" Netns:\\\"/var/run/netns/fae5c187-a6dc-49ce-83eb-8e2657a43045\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-56c495df99-b9rjl;K8S_POD_INFRA_CONTAINER_ID=369f1f6213aad2e1fefc8f1489aa0ae3a591c8972344d322898bc7bac4cd9f18;K8S_POD_UID=c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-56c495df99-b9rjl] networking: Multus: [openshift-authentication/oauth-openshift-56c495df99-b9rjl/c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-56c495df99-b9rjl in out of cluster comm: pod \\\"oauth-openshift-56c495df99-b9rjl\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" podUID="c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.327958 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
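[Annotation: the sandbox failure above is a transient race, not a persistent fault. While multus was writing the network-status annotation it could not find the pod via the API server ("pod was already deleted"), so the CNI ADD came back with status 400; the kubelet then surfaces the identical rpc error at three layers (log.go:32, kuberuntime_sandbox.go:72, kuberuntime_manager.go:1170) before pod_workers gives up for this sync. The "No sandbox for pod can be found" records that follow show the retry, and the same pod UID reaches ContainerStarted shortly after. The StdinData blob is the multus-shim delegate configuration; a small sketch that decodes exactly the fields shown in the log follows. The struct is illustrative only, not multus's own NetConf type.]

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative mirror of the StdinData fields logged above; the real
    // multus configuration type carries more fields than these.
    type multusShimConf struct {
        BinDir             string `json:"binDir"`
        ClusterNetwork     string `json:"clusterNetwork"`
        CNIVersion         string `json:"cniVersion"`
        DaemonSocketDir    string `json:"daemonSocketDir"`
        GlobalNamespaces   string `json:"globalNamespaces"`
        LogLevel           string `json:"logLevel"`
        LogToStderr        bool   `json:"logToStderr"`
        Name               string `json:"name"`
        NamespaceIsolation bool   `json:"namespaceIsolation"`
        Type               string `json:"type"`
    }

    func main() {
        stdin := []byte(`{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}`)
        var conf multusShimConf
        if err := json.Unmarshal(stdin, &conf); err != nil {
            panic(err)
        }
        // The shim forwards CNI requests over the daemon socket to the multus
        // daemon, which delegates to the cluster network named here.
        fmt.Printf("delegate=%s socketDir=%s\n", conf.ClusterNetwork, conf.DaemonSocketDir)
    }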
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.359147 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.550065 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.564965 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.588947 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.740619 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.864553 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.875945 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.889382 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.895655 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.922324 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.948725 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl"
Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.949205 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:35 crc kubenswrapper[4780]: I1205 06:50:35.965017 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.129434 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.150826 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.275480 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.365770 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.465983 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.470249 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.625210 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.735941 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c495df99-b9rjl"] Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.747348 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 06:50:36 crc kubenswrapper[4780]: I1205 06:50:36.956044 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" event={"ID":"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2","Type":"ContainerStarted","Data":"30d100b9f438d8e2007ace1e053392ea5708bbdd73d68c3ab2f4ee20d9c875f1"} Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.079135 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.274907 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.688168 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.688551 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.706803 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.842708 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.842779 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.842799 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.842823 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.842858 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843005 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843060 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843120 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843672 4780 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843688 4780 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843698 4780 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.843706 4780 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.850110 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.944368 4780 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.964089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" event={"ID":"c0b4dbb3-4a54-4392-b8dd-6d7e2c509ee2","Type":"ContainerStarted","Data":"8eb4e80bc5631ca8e41d5c55ae5980f5b04c98dc30de3387e9d16384613fcf68"} Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.964542 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.965651 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.965713 4780 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e" exitCode=137 Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.965769 4780 scope.go:117] "RemoveContainer" containerID="4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.965837 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.969042 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.986509 4780 scope.go:117] "RemoveContainer" containerID="4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e" Dec 05 06:50:37 crc kubenswrapper[4780]: E1205 06:50:37.986957 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e\": container with ID starting with 4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e not found: ID does not exist" containerID="4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e" Dec 05 06:50:37 crc kubenswrapper[4780]: I1205 06:50:37.987014 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e"} err="failed to get container status \"4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e\": rpc error: code = NotFound desc = could not find container \"4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e\": container with ID starting with 4b31860363090645306e2c2fb6128f99e9a432655652324ccdf641f06024486e not found: ID does not exist" Dec 05 06:50:38 crc kubenswrapper[4780]: I1205 06:50:37.998063 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c495df99-b9rjl" podStartSLOduration=60.998038372 podStartE2EDuration="1m0.998038372s" podCreationTimestamp="2025-12-05 06:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:50:37.990276358 +0000 UTC m=+272.059792690" watchObservedRunningTime="2025-12-05 06:50:37.998038372 +0000 UTC m=+272.067554704" Dec 05 06:50:38 crc kubenswrapper[4780]: I1205 06:50:38.127286 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 06:50:38 crc kubenswrapper[4780]: I1205 06:50:38.145170 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 05 06:50:38 crc kubenswrapper[4780]: I1205 06:50:38.653406 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 06:50:38 crc kubenswrapper[4780]: I1205 06:50:38.840854 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.443502 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.445066 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerName="controller-manager" containerID="cri-o://69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b" gracePeriod=30 Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.549811 4780 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.550024 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" podUID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" containerName="route-controller-manager" containerID="cri-o://b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b" gracePeriod=30 Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.842593 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.911449 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929026 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9www\" (UniqueName: \"kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www\") pod \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929147 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles\") pod \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929220 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config\") pod \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929256 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgrwj\" (UniqueName: \"kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj\") pod \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca\") pod \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929318 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca\") pod \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929349 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert\") pod \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929377 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert\") pod \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\" (UID: \"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.929515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config\") pod \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\" (UID: \"881d3d6f-e692-4c33-b3fd-8bdba759d80d\") " Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.931369 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca" (OuterVolumeSpecName: "client-ca") pod "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" (UID: "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.931480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config" (OuterVolumeSpecName: "config") pod "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" (UID: "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.931954 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "881d3d6f-e692-4c33-b3fd-8bdba759d80d" (UID: "881d3d6f-e692-4c33-b3fd-8bdba759d80d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.932022 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config" (OuterVolumeSpecName: "config") pod "881d3d6f-e692-4c33-b3fd-8bdba759d80d" (UID: "881d3d6f-e692-4c33-b3fd-8bdba759d80d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.932406 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca" (OuterVolumeSpecName: "client-ca") pod "881d3d6f-e692-4c33-b3fd-8bdba759d80d" (UID: "881d3d6f-e692-4c33-b3fd-8bdba759d80d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.937321 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" (UID: "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.937496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj" (OuterVolumeSpecName: "kube-api-access-mgrwj") pod "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" (UID: "33c802e4-3a06-427d-8b1b-58f4ca6d4e9d"). 
InnerVolumeSpecName "kube-api-access-mgrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.937503 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "881d3d6f-e692-4c33-b3fd-8bdba759d80d" (UID: "881d3d6f-e692-4c33-b3fd-8bdba759d80d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:50:59 crc kubenswrapper[4780]: I1205 06:50:59.939125 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www" (OuterVolumeSpecName: "kube-api-access-w9www") pod "881d3d6f-e692-4c33-b3fd-8bdba759d80d" (UID: "881d3d6f-e692-4c33-b3fd-8bdba759d80d"). InnerVolumeSpecName "kube-api-access-w9www". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.030994 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031025 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031033 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881d3d6f-e692-4c33-b3fd-8bdba759d80d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031041 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031050 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031059 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9www\" (UniqueName: \"kubernetes.io/projected/881d3d6f-e692-4c33-b3fd-8bdba759d80d-kube-api-access-w9www\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031067 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/881d3d6f-e692-4c33-b3fd-8bdba759d80d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031075 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.031084 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgrwj\" (UniqueName: \"kubernetes.io/projected/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d-kube-api-access-mgrwj\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.080218 4780 generic.go:334] "Generic (PLEG): container finished" podID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" 
containerID="b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b" exitCode=0 Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.080276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" event={"ID":"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d","Type":"ContainerDied","Data":"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b"} Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.080693 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" event={"ID":"33c802e4-3a06-427d-8b1b-58f4ca6d4e9d","Type":"ContainerDied","Data":"83ea97022b8d10f78f70f4d0a2abad3893eeb466329fdc399f5da9205de726cb"} Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.080711 4780 scope.go:117] "RemoveContainer" containerID="b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.080291 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.083330 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.083397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" event={"ID":"881d3d6f-e692-4c33-b3fd-8bdba759d80d","Type":"ContainerDied","Data":"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b"} Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.083269 4780 generic.go:334] "Generic (PLEG): container finished" podID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerID="69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b" exitCode=0 Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.086121 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" event={"ID":"881d3d6f-e692-4c33-b3fd-8bdba759d80d","Type":"ContainerDied","Data":"69bb155eb4e4779ca918273a9eb261095f0c573d7513fe55a39700ff2a25d6c8"} Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.100505 4780 scope.go:117] "RemoveContainer" containerID="b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b" Dec 05 06:51:00 crc kubenswrapper[4780]: E1205 06:51:00.102200 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b\": container with ID starting with b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b not found: ID does not exist" containerID="b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.102282 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b"} err="failed to get container status \"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b\": rpc error: code = NotFound desc = could not find container \"b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b\": container with ID starting with b41c2d65589589f08c507b35db65ce00d10c84dd5e8078a692ca16c371ed954b not 
found: ID does not exist" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.102334 4780 scope.go:117] "RemoveContainer" containerID="69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.122525 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.125763 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vxxmv"] Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.135421 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.137071 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-577gd"] Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.142246 4780 scope.go:117] "RemoveContainer" containerID="69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b" Dec 05 06:51:00 crc kubenswrapper[4780]: E1205 06:51:00.142993 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b\": container with ID starting with 69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b not found: ID does not exist" containerID="69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.143021 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b"} err="failed to get container status \"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b\": rpc error: code = NotFound desc = could not find container \"69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b\": container with ID starting with 69d37584a0eca39e7160bf785a08c53f3bee3a6cca9a47410f880007bc3d516b not found: ID does not exist" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.150531 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" path="/var/lib/kubelet/pods/33c802e4-3a06-427d-8b1b-58f4ca6d4e9d/volumes" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.151051 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" path="/var/lib/kubelet/pods/881d3d6f-e692-4c33-b3fd-8bdba759d80d/volumes" Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.707572 4780 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-577gd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 06:51:00 crc kubenswrapper[4780]: I1205 06:51:00.707624 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-577gd" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626188 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"] Dec 05 06:51:01 crc kubenswrapper[4780]: E1205 06:51:01.626501 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626521 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 06:51:01 crc kubenswrapper[4780]: E1205 06:51:01.626546 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerName="controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626559 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerName="controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: E1205 06:51:01.626587 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" containerName="route-controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626599 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" containerName="route-controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626766 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c802e4-3a06-427d-8b1b-58f4ca6d4e9d" containerName="route-controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626787 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="881d3d6f-e692-4c33-b3fd-8bdba759d80d" containerName="controller-manager" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.626801 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.627353 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631148 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631203 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631300 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631577 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631614 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.631832 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.632121 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"] Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.633100 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.635517 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.635831 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.635870 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.635957 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.636246 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.637375 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.648024 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.649822 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"] Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.655809 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " 
pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.655993 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmqz\" (UniqueName: \"kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656210 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8fzf\" (UniqueName: \"kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656249 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656326 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.656363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " 
pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.659379 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"] Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.757476 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.757517 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.757547 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758347 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758371 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758388 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758396 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmqz\" (UniqueName: \"kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758472 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8fzf\" (UniqueName: \"kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: 
\"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.758539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.759408 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.759496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.759576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.760575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.764727 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.770343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.781549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rlmqz\" (UniqueName: \"kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz\") pod \"controller-manager-76b976544-wtqx6\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") " pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.781726 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8fzf\" (UniqueName: \"kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf\") pod \"route-controller-manager-8d8c54746-4q88r\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") " pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.949278 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:01 crc kubenswrapper[4780]: I1205 06:51:01.961496 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:02 crc kubenswrapper[4780]: I1205 06:51:02.394377 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"] Dec 05 06:51:02 crc kubenswrapper[4780]: W1205 06:51:02.407266 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89bd5e5b_70fa_49b4_b44b_df2d92be8b83.slice/crio-2d5e2883f20bbf2dbb2fd769039db8c1c7b3c5c6dc57ca8e518705347af972d8 WatchSource:0}: Error finding container 2d5e2883f20bbf2dbb2fd769039db8c1c7b3c5c6dc57ca8e518705347af972d8: Status 404 returned error can't find the container with id 2d5e2883f20bbf2dbb2fd769039db8c1c7b3c5c6dc57ca8e518705347af972d8 Dec 05 06:51:02 crc kubenswrapper[4780]: I1205 06:51:02.408614 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"] Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.108456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" event={"ID":"89bd5e5b-70fa-49b4-b44b-df2d92be8b83","Type":"ContainerStarted","Data":"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"} Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.108806 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" event={"ID":"89bd5e5b-70fa-49b4-b44b-df2d92be8b83","Type":"ContainerStarted","Data":"2d5e2883f20bbf2dbb2fd769039db8c1c7b3c5c6dc57ca8e518705347af972d8"} Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.108826 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.110432 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" event={"ID":"50afb3df-029c-4140-b767-b17c043114a1","Type":"ContainerStarted","Data":"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"} Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.110477 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" 
event={"ID":"50afb3df-029c-4140-b767-b17c043114a1","Type":"ContainerStarted","Data":"b6e1973b4b938121a8d3c59bf4124dc990bacc4d0bf76904e1edddcda3e3173b"} Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.110683 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.114337 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.115036 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.125296 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" podStartSLOduration=4.125280938 podStartE2EDuration="4.125280938s" podCreationTimestamp="2025-12-05 06:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:03.123850157 +0000 UTC m=+297.193366509" watchObservedRunningTime="2025-12-05 06:51:03.125280938 +0000 UTC m=+297.194797260" Dec 05 06:51:03 crc kubenswrapper[4780]: I1205 06:51:03.159711 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" podStartSLOduration=4.159692571 podStartE2EDuration="4.159692571s" podCreationTimestamp="2025-12-05 06:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:03.15900525 +0000 UTC m=+297.228521592" watchObservedRunningTime="2025-12-05 06:51:03.159692571 +0000 UTC m=+297.229208903" Dec 05 06:51:06 crc kubenswrapper[4780]: I1205 06:51:06.045571 4780 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.204402 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5j52"] Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.205556 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.220840 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5j52"] Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.385728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.385807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ad4064f-3aa6-46df-8f6e-79149a63e81a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.385838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-bound-sa-token\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.385865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-tls\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.386170 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-certificates\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.386216 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ad4064f-3aa6-46df-8f6e-79149a63e81a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.386244 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tzf\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-kube-api-access-s4tzf\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.386289 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-trusted-ca\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.404733 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487401 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ad4064f-3aa6-46df-8f6e-79149a63e81a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487468 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tzf\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-kube-api-access-s4tzf\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-trusted-ca\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487564 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ad4064f-3aa6-46df-8f6e-79149a63e81a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487599 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-bound-sa-token\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487620 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-tls\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487664 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-certificates\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.487992 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4ad4064f-3aa6-46df-8f6e-79149a63e81a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.489021 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-trusted-ca\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.489187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-certificates\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.492931 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4ad4064f-3aa6-46df-8f6e-79149a63e81a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.493187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-registry-tls\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.505684 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-bound-sa-token\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.507139 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4tzf\" (UniqueName: \"kubernetes.io/projected/4ad4064f-3aa6-46df-8f6e-79149a63e81a-kube-api-access-s4tzf\") pod \"image-registry-66df7c8f76-z5j52\" (UID: \"4ad4064f-3aa6-46df-8f6e-79149a63e81a\") " pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.526039 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:18 crc kubenswrapper[4780]: I1205 06:51:18.923456 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-z5j52"] Dec 05 06:51:18 crc kubenswrapper[4780]: W1205 06:51:18.928262 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad4064f_3aa6_46df_8f6e_79149a63e81a.slice/crio-a554c673142e582b25eb02718b4777d2e44f3a57ccf4cfd8212d48048020fbb4 WatchSource:0}: Error finding container a554c673142e582b25eb02718b4777d2e44f3a57ccf4cfd8212d48048020fbb4: Status 404 returned error can't find the container with id a554c673142e582b25eb02718b4777d2e44f3a57ccf4cfd8212d48048020fbb4 Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.194222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" event={"ID":"4ad4064f-3aa6-46df-8f6e-79149a63e81a","Type":"ContainerStarted","Data":"f18571ccd2b238a5e564255b10f290ec5d30f2a2685d648c26c4f7919e0c1bff"} Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.194573 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" event={"ID":"4ad4064f-3aa6-46df-8f6e-79149a63e81a","Type":"ContainerStarted","Data":"a554c673142e582b25eb02718b4777d2e44f3a57ccf4cfd8212d48048020fbb4"} Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.194610 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.226667 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" podStartSLOduration=1.226646181 podStartE2EDuration="1.226646181s" podCreationTimestamp="2025-12-05 06:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:19.222846641 +0000 UTC m=+313.292362993" watchObservedRunningTime="2025-12-05 06:51:19.226646181 +0000 UTC m=+313.296162513" Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.413924 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"] Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.414117 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" podUID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" containerName="controller-manager" containerID="cri-o://2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b" gracePeriod=30 Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.426972 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"] Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.427241 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" podUID="50afb3df-029c-4140-b767-b17c043114a1" containerName="route-controller-manager" containerID="cri-o://68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5" gracePeriod=30 Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.875416 4780 util.go:48] "No ready sandbox for pod 
Dec 05 06:51:19 crc kubenswrapper[4780]: I1205 06:51:19.950870 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.010643 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config\") pod \"50afb3df-029c-4140-b767-b17c043114a1\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.010988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca\") pod \"50afb3df-029c-4140-b767-b17c043114a1\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.011017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8fzf\" (UniqueName: \"kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf\") pod \"50afb3df-029c-4140-b767-b17c043114a1\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.011059 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert\") pod \"50afb3df-029c-4140-b767-b17c043114a1\" (UID: \"50afb3df-029c-4140-b767-b17c043114a1\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.011666 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config" (OuterVolumeSpecName: "config") pod "50afb3df-029c-4140-b767-b17c043114a1" (UID: "50afb3df-029c-4140-b767-b17c043114a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.011684 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "50afb3df-029c-4140-b767-b17c043114a1" (UID: "50afb3df-029c-4140-b767-b17c043114a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.016058 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf" (OuterVolumeSpecName: "kube-api-access-v8fzf") pod "50afb3df-029c-4140-b767-b17c043114a1" (UID: "50afb3df-029c-4140-b767-b17c043114a1"). InnerVolumeSpecName "kube-api-access-v8fzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.016904 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50afb3df-029c-4140-b767-b17c043114a1" (UID: "50afb3df-029c-4140-b767-b17c043114a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.112731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca\") pod \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.112983 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert\") pod \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113043 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config\") pod \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113078 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles\") pod \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113181 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlmqz\" (UniqueName: \"kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz\") pod \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\" (UID: \"89bd5e5b-70fa-49b4-b44b-df2d92be8b83\") "
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113552 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-client-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113589 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8fzf\" (UniqueName: \"kubernetes.io/projected/50afb3df-029c-4140-b767-b17c043114a1-kube-api-access-v8fzf\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113608 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50afb3df-029c-4140-b767-b17c043114a1-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113625 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50afb3df-029c-4140-b767-b17c043114a1-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.113926 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca" (OuterVolumeSpecName: "client-ca") pod "89bd5e5b-70fa-49b4-b44b-df2d92be8b83" (UID: "89bd5e5b-70fa-49b4-b44b-df2d92be8b83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.114364 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "89bd5e5b-70fa-49b4-b44b-df2d92be8b83" (UID: "89bd5e5b-70fa-49b4-b44b-df2d92be8b83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.114478 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config" (OuterVolumeSpecName: "config") pod "89bd5e5b-70fa-49b4-b44b-df2d92be8b83" (UID: "89bd5e5b-70fa-49b4-b44b-df2d92be8b83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.116270 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz" (OuterVolumeSpecName: "kube-api-access-rlmqz") pod "89bd5e5b-70fa-49b4-b44b-df2d92be8b83" (UID: "89bd5e5b-70fa-49b4-b44b-df2d92be8b83"). InnerVolumeSpecName "kube-api-access-rlmqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.117764 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89bd5e5b-70fa-49b4-b44b-df2d92be8b83" (UID: "89bd5e5b-70fa-49b4-b44b-df2d92be8b83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.199523 4780 generic.go:334] "Generic (PLEG): container finished" podID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" containerID="2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b" exitCode=0
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.199584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" event={"ID":"89bd5e5b-70fa-49b4-b44b-df2d92be8b83","Type":"ContainerDied","Data":"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"}
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.199604 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.199658 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76b976544-wtqx6" event={"ID":"89bd5e5b-70fa-49b4-b44b-df2d92be8b83","Type":"ContainerDied","Data":"2d5e2883f20bbf2dbb2fd769039db8c1c7b3c5c6dc57ca8e518705347af972d8"}
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.199677 4780 scope.go:117] "RemoveContainer" containerID="2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.204925 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" event={"ID":"50afb3df-029c-4140-b767-b17c043114a1","Type":"ContainerDied","Data":"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"}
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.204935 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.204818 4780 generic.go:334] "Generic (PLEG): container finished" podID="50afb3df-029c-4140-b767-b17c043114a1" containerID="68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5" exitCode=0
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.205054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r" event={"ID":"50afb3df-029c-4140-b767-b17c043114a1","Type":"ContainerDied","Data":"b6e1973b4b938121a8d3c59bf4124dc990bacc4d0bf76904e1edddcda3e3173b"}
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.214461 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlmqz\" (UniqueName: \"kubernetes.io/projected/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-kube-api-access-rlmqz\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.214513 4780 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-client-ca\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.214536 4780 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.214554 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-config\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.214571 4780 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89bd5e5b-70fa-49b4-b44b-df2d92be8b83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.222665 4780 scope.go:117] "RemoveContainer" containerID="2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"
Dec 05 06:51:20 crc kubenswrapper[4780]: E1205 06:51:20.223162 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b\": container with ID starting with 2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b not found: ID does not exist" containerID="2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.223229 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b"} err="failed to get container status \"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b\": rpc error: code = NotFound desc = could not find container \"2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b\": container with ID starting with 2232fe228d7fd7544a0c8bfdca84c2732f1889c79c27b525edf5aff0ea2c9b8b not found: ID does not exist"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.223261 4780 scope.go:117] "RemoveContainer" containerID="68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.226446 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.234077 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76b976544-wtqx6"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.238819 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.242609 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d8c54746-4q88r"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.242913 4780 scope.go:117] "RemoveContainer" containerID="68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"
Dec 05 06:51:20 crc kubenswrapper[4780]: E1205 06:51:20.243451 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5\": container with ID starting with 68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5 not found: ID does not exist" containerID="68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.243490 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5"} err="failed to get container status \"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5\": rpc error: code = NotFound desc = could not find container \"68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5\": container with ID starting with 68a2fb2f8e2ca4b1b48b9b827358d61f2494324b6f408fc221ab765128d201f5 not found: ID does not exist"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.624754 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"]
Dec 05 06:51:20 crc kubenswrapper[4780]: E1205 06:51:20.625830 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50afb3df-029c-4140-b767-b17c043114a1" containerName="route-controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.625866 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="50afb3df-029c-4140-b767-b17c043114a1" containerName="route-controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: E1205 06:51:20.625934 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" containerName="controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.625945 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" containerName="controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.626346 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="50afb3df-029c-4140-b767-b17c043114a1" containerName="route-controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.626373 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" containerName="controller-manager"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.632902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.639409 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.639665 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.639715 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.639935 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.640160 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.641147 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.644167 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.645027 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.651620 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.652248 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.652451 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.652613 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.652923 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.653483 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.658105 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.661689 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"]
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.666374 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.723464 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-client-ca\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.723533 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twc8\" (UniqueName: \"kubernetes.io/projected/645227e3-1dd2-4941-9650-f403ea000d49-kube-api-access-5twc8\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.723686 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645227e3-1dd2-4941-9650-f403ea000d49-serving-cert\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.723841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-config\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825067 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-config\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-config\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825164 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-client-ca\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825185 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblb8\" (UniqueName: \"kubernetes.io/projected/168310c6-0734-4e52-a0af-9d8d0229e292-kube-api-access-mblb8\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825204 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-client-ca\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825221 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twc8\" (UniqueName: \"kubernetes.io/projected/645227e3-1dd2-4941-9650-f403ea000d49-kube-api-access-5twc8\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825249 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645227e3-1dd2-4941-9650-f403ea000d49-serving-cert\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"
Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/168310c6-0734-4e52-a0af-9d8d0229e292-serving-cert\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") "
pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.825291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-proxy-ca-bundles\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.826399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-client-ca\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.826816 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645227e3-1dd2-4941-9650-f403ea000d49-config\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.829262 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645227e3-1dd2-4941-9650-f403ea000d49-serving-cert\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.848172 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twc8\" (UniqueName: \"kubernetes.io/projected/645227e3-1dd2-4941-9650-f403ea000d49-kube-api-access-5twc8\") pod \"route-controller-manager-55bdb5ff86-v7qdx\" (UID: \"645227e3-1dd2-4941-9650-f403ea000d49\") " pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-client-ca\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblb8\" (UniqueName: \"kubernetes.io/projected/168310c6-0734-4e52-a0af-9d8d0229e292-kube-api-access-mblb8\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/168310c6-0734-4e52-a0af-9d8d0229e292-serving-cert\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927178 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-proxy-ca-bundles\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927248 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-config\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.927866 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-client-ca\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.928624 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-proxy-ca-bundles\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.930828 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168310c6-0734-4e52-a0af-9d8d0229e292-config\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.931386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/168310c6-0734-4e52-a0af-9d8d0229e292-serving-cert\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.941955 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblb8\" (UniqueName: \"kubernetes.io/projected/168310c6-0734-4e52-a0af-9d8d0229e292-kube-api-access-mblb8\") pod \"controller-manager-68ddbbc7c8-m4m7k\" (UID: \"168310c6-0734-4e52-a0af-9d8d0229e292\") " pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.949795 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:20 crc kubenswrapper[4780]: I1205 06:51:20.970784 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:21 crc kubenswrapper[4780]: I1205 06:51:21.393407 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k"] Dec 05 06:51:21 crc kubenswrapper[4780]: W1205 06:51:21.399514 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod168310c6_0734_4e52_a0af_9d8d0229e292.slice/crio-3ae26c2e6326567c05acddcc5f9a3691796b79cd42bbfc76aa643bbe424dd525 WatchSource:0}: Error finding container 3ae26c2e6326567c05acddcc5f9a3691796b79cd42bbfc76aa643bbe424dd525: Status 404 returned error can't find the container with id 3ae26c2e6326567c05acddcc5f9a3691796b79cd42bbfc76aa643bbe424dd525 Dec 05 06:51:21 crc kubenswrapper[4780]: I1205 06:51:21.437100 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx"] Dec 05 06:51:21 crc kubenswrapper[4780]: W1205 06:51:21.443929 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645227e3_1dd2_4941_9650_f403ea000d49.slice/crio-20683ae8dd277669a10ccd5bd79aec771faae939b7bc68bcae30128ea531e505 WatchSource:0}: Error finding container 20683ae8dd277669a10ccd5bd79aec771faae939b7bc68bcae30128ea531e505: Status 404 returned error can't find the container with id 20683ae8dd277669a10ccd5bd79aec771faae939b7bc68bcae30128ea531e505 Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.144328 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50afb3df-029c-4140-b767-b17c043114a1" path="/var/lib/kubelet/pods/50afb3df-029c-4140-b767-b17c043114a1/volumes" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.145270 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89bd5e5b-70fa-49b4-b44b-df2d92be8b83" path="/var/lib/kubelet/pods/89bd5e5b-70fa-49b4-b44b-df2d92be8b83/volumes" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.220401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" event={"ID":"645227e3-1dd2-4941-9650-f403ea000d49","Type":"ContainerStarted","Data":"c26a1f3b20356d178916075dc558217a5ef2e5159eaa245f112ec3dde5f28cc9"} Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.220442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" event={"ID":"645227e3-1dd2-4941-9650-f403ea000d49","Type":"ContainerStarted","Data":"20683ae8dd277669a10ccd5bd79aec771faae939b7bc68bcae30128ea531e505"} Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.221548 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.223986 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" event={"ID":"168310c6-0734-4e52-a0af-9d8d0229e292","Type":"ContainerStarted","Data":"d908c921c78a35bab6990893420306ef61cf8827a5a336daeb164b6d7a51b10c"} Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.224024 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" 
event={"ID":"168310c6-0734-4e52-a0af-9d8d0229e292","Type":"ContainerStarted","Data":"3ae26c2e6326567c05acddcc5f9a3691796b79cd42bbfc76aa643bbe424dd525"} Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.224234 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.230036 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.230307 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.269865 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68ddbbc7c8-m4m7k" podStartSLOduration=3.26985014 podStartE2EDuration="3.26985014s" podCreationTimestamp="2025-12-05 06:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:22.268349766 +0000 UTC m=+316.337866098" watchObservedRunningTime="2025-12-05 06:51:22.26985014 +0000 UTC m=+316.339366472" Dec 05 06:51:22 crc kubenswrapper[4780]: I1205 06:51:22.271010 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55bdb5ff86-v7qdx" podStartSLOduration=3.271003932 podStartE2EDuration="3.271003932s" podCreationTimestamp="2025-12-05 06:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:22.246212268 +0000 UTC m=+316.315728600" watchObservedRunningTime="2025-12-05 06:51:22.271003932 +0000 UTC m=+316.340520264" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.304319 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.305304 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhgg6" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="registry-server" containerID="cri-o://18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d" gracePeriod=30 Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.309576 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.309912 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljcn2" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="registry-server" containerID="cri-o://16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa" gracePeriod=30 Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.328410 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.328690 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" 
containerID="cri-o://ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35" gracePeriod=30 Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.334143 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.334360 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qgdk4" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="registry-server" containerID="cri-o://1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98" gracePeriod=30 Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.347859 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.348110 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5b2tw" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="registry-server" containerID="cri-o://39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533" gracePeriod=30 Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.353841 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lptmc"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.354663 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.358710 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lptmc"] Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.518186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.518334 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.518366 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnj8\" (UniqueName: \"kubernetes.io/projected/a3732e56-d979-4da4-88e7-bf3e0aa77daf-kube-api-access-mrnj8\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.619513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 
05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.619567 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnj8\" (UniqueName: \"kubernetes.io/projected/a3732e56-d979-4da4-88e7-bf3e0aa77daf-kube-api-access-mrnj8\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.619603 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.621054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.636858 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3732e56-d979-4da4-88e7-bf3e0aa77daf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.649440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnj8\" (UniqueName: \"kubernetes.io/projected/a3732e56-d979-4da4-88e7-bf3e0aa77daf-kube-api-access-mrnj8\") pod \"marketplace-operator-79b997595-lptmc\" (UID: \"a3732e56-d979-4da4-88e7-bf3e0aa77daf\") " pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.672455 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.785090 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.923555 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities\") pod \"623f84ec-99d6-44fc-8633-bf158d5b8dda\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.923600 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content\") pod \"623f84ec-99d6-44fc-8633-bf158d5b8dda\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.923717 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cnck\" (UniqueName: \"kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck\") pod \"623f84ec-99d6-44fc-8633-bf158d5b8dda\" (UID: \"623f84ec-99d6-44fc-8633-bf158d5b8dda\") " Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.924832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities" (OuterVolumeSpecName: "utilities") pod "623f84ec-99d6-44fc-8633-bf158d5b8dda" (UID: "623f84ec-99d6-44fc-8633-bf158d5b8dda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.927543 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck" (OuterVolumeSpecName: "kube-api-access-6cnck") pod "623f84ec-99d6-44fc-8633-bf158d5b8dda" (UID: "623f84ec-99d6-44fc-8633-bf158d5b8dda"). InnerVolumeSpecName "kube-api-access-6cnck". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.986181 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "623f84ec-99d6-44fc-8633-bf158d5b8dda" (UID: "623f84ec-99d6-44fc-8633-bf158d5b8dda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:35 crc kubenswrapper[4780]: I1205 06:51:35.991677 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.025518 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cnck\" (UniqueName: \"kubernetes.io/projected/623f84ec-99d6-44fc-8633-bf158d5b8dda-kube-api-access-6cnck\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.025561 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.025573 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623f84ec-99d6-44fc-8633-bf158d5b8dda-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.027788 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.038889 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.040502 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126395 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities\") pod \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126436 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content\") pod \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca\") pod \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126503 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics\") pod \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126524 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities\") pod \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r5ls\" (UniqueName: 
\"kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls\") pod \"ad87a211-56cb-40ed-8d89-33f1900987d1\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126579 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqpjl\" (UniqueName: \"kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl\") pod \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\" (UID: \"7dccd32c-dbd1-45fb-8743-8ebd508423ad\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126640 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bftgg\" (UniqueName: \"kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg\") pod \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\" (UID: \"3b5ca6f7-6820-4010-966e-05e4cf49ba03\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content\") pod \"ad87a211-56cb-40ed-8d89-33f1900987d1\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j24qf\" (UniqueName: \"kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf\") pod \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126728 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content\") pod \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\" (UID: \"a9b07063-6822-4f6a-ab0c-d6951daae0c3\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.126746 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities\") pod \"ad87a211-56cb-40ed-8d89-33f1900987d1\" (UID: \"ad87a211-56cb-40ed-8d89-33f1900987d1\") " Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.127165 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3b5ca6f7-6820-4010-966e-05e4cf49ba03" (UID: "3b5ca6f7-6820-4010-966e-05e4cf49ba03"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.127605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities" (OuterVolumeSpecName: "utilities") pod "ad87a211-56cb-40ed-8d89-33f1900987d1" (UID: "ad87a211-56cb-40ed-8d89-33f1900987d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.128156 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities" (OuterVolumeSpecName: "utilities") pod "a9b07063-6822-4f6a-ab0c-d6951daae0c3" (UID: "a9b07063-6822-4f6a-ab0c-d6951daae0c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.131524 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities" (OuterVolumeSpecName: "utilities") pod "7dccd32c-dbd1-45fb-8743-8ebd508423ad" (UID: "7dccd32c-dbd1-45fb-8743-8ebd508423ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.131770 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf" (OuterVolumeSpecName: "kube-api-access-j24qf") pod "a9b07063-6822-4f6a-ab0c-d6951daae0c3" (UID: "a9b07063-6822-4f6a-ab0c-d6951daae0c3"). InnerVolumeSpecName "kube-api-access-j24qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.131986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg" (OuterVolumeSpecName: "kube-api-access-bftgg") pod "3b5ca6f7-6820-4010-966e-05e4cf49ba03" (UID: "3b5ca6f7-6820-4010-966e-05e4cf49ba03"). InnerVolumeSpecName "kube-api-access-bftgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.131841 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls" (OuterVolumeSpecName: "kube-api-access-6r5ls") pod "ad87a211-56cb-40ed-8d89-33f1900987d1" (UID: "ad87a211-56cb-40ed-8d89-33f1900987d1"). InnerVolumeSpecName "kube-api-access-6r5ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.132040 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3b5ca6f7-6820-4010-966e-05e4cf49ba03" (UID: "3b5ca6f7-6820-4010-966e-05e4cf49ba03"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.136218 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl" (OuterVolumeSpecName: "kube-api-access-fqpjl") pod "7dccd32c-dbd1-45fb-8743-8ebd508423ad" (UID: "7dccd32c-dbd1-45fb-8743-8ebd508423ad"). InnerVolumeSpecName "kube-api-access-fqpjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.163183 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad87a211-56cb-40ed-8d89-33f1900987d1" (UID: "ad87a211-56cb-40ed-8d89-33f1900987d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.185431 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9b07063-6822-4f6a-ab0c-d6951daae0c3" (UID: "a9b07063-6822-4f6a-ab0c-d6951daae0c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.228407 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bftgg\" (UniqueName: \"kubernetes.io/projected/3b5ca6f7-6820-4010-966e-05e4cf49ba03-kube-api-access-bftgg\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.228693 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.228779 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j24qf\" (UniqueName: \"kubernetes.io/projected/a9b07063-6822-4f6a-ab0c-d6951daae0c3-kube-api-access-j24qf\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.228849 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.228942 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad87a211-56cb-40ed-8d89-33f1900987d1-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229060 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9b07063-6822-4f6a-ab0c-d6951daae0c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229143 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229211 4780 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b5ca6f7-6820-4010-966e-05e4cf49ba03-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229276 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229335 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r5ls\" 
(UniqueName: \"kubernetes.io/projected/ad87a211-56cb-40ed-8d89-33f1900987d1-kube-api-access-6r5ls\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.229394 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqpjl\" (UniqueName: \"kubernetes.io/projected/7dccd32c-dbd1-45fb-8743-8ebd508423ad-kube-api-access-fqpjl\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.250299 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dccd32c-dbd1-45fb-8743-8ebd508423ad" (UID: "7dccd32c-dbd1-45fb-8743-8ebd508423ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.278199 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lptmc"] Dec 05 06:51:36 crc kubenswrapper[4780]: W1205 06:51:36.280704 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3732e56_d979_4da4_88e7_bf3e0aa77daf.slice/crio-5058850f85f68404953950125faa199997ffb17a3a3765d9abaf518c87f9adaf WatchSource:0}: Error finding container 5058850f85f68404953950125faa199997ffb17a3a3765d9abaf518c87f9adaf: Status 404 returned error can't find the container with id 5058850f85f68404953950125faa199997ffb17a3a3765d9abaf518c87f9adaf Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.304528 4780 generic.go:334] "Generic (PLEG): container finished" podID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerID="39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533" exitCode=0 Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.304670 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2tw" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.304696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerDied","Data":"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.305109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2tw" event={"ID":"7dccd32c-dbd1-45fb-8743-8ebd508423ad","Type":"ContainerDied","Data":"d2d980184575a82e722f2ac8189d6cda1da2ff5cf72098b53d2b6e492c296feb"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.305157 4780 scope.go:117] "RemoveContainer" containerID="39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.311013 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerID="1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98" exitCode=0 Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.311079 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerDied","Data":"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.311105 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgdk4" event={"ID":"ad87a211-56cb-40ed-8d89-33f1900987d1","Type":"ContainerDied","Data":"885c17bb26f0ad4b3328e1c3da93fa7c9e403fe8687c2fdfdd20865616e683d2"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.311637 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgdk4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.313834 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerID="18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d" exitCode=0 Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.313891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerDied","Data":"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.313912 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhgg6" event={"ID":"a9b07063-6822-4f6a-ab0c-d6951daae0c3","Type":"ContainerDied","Data":"82e937dca8e508691b2fab96dfbcf89318cf4c8dabf46282fb7229c1a8956d66"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.314037 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhgg6" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.317155 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" event={"ID":"a3732e56-d979-4da4-88e7-bf3e0aa77daf","Type":"ContainerStarted","Data":"5058850f85f68404953950125faa199997ffb17a3a3765d9abaf518c87f9adaf"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.322956 4780 generic.go:334] "Generic (PLEG): container finished" podID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerID="ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35" exitCode=0 Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.323018 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.323033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" event={"ID":"3b5ca6f7-6820-4010-966e-05e4cf49ba03","Type":"ContainerDied","Data":"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.324617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-87cmc" event={"ID":"3b5ca6f7-6820-4010-966e-05e4cf49ba03","Type":"ContainerDied","Data":"589ab147a7ff2f73690502ed232c60292eb8adbad79b9ab1ce30dfc1bd424236"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.326563 4780 scope.go:117] "RemoveContainer" containerID="a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.329534 4780 generic.go:334] "Generic (PLEG): container finished" podID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerID="16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa" exitCode=0 Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.329594 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljcn2" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.329615 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerDied","Data":"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.329935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljcn2" event={"ID":"623f84ec-99d6-44fc-8633-bf158d5b8dda","Type":"ContainerDied","Data":"579dbcdcdf6d8473736d8aabe28921b177a0b91d547c2eadd15cc34bbacf5539"} Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.330449 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dccd32c-dbd1-45fb-8743-8ebd508423ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.351033 4780 scope.go:117] "RemoveContainer" containerID="b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.365728 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.373161 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhgg6"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.378057 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.380964 4780 scope.go:117] "RemoveContainer" containerID="39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.384394 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533\": container with ID starting with 39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533 not found: ID does not exist" containerID="39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.384441 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533"} err="failed to get container status \"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533\": rpc error: code = NotFound desc = could not find container \"39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533\": container with ID starting with 39f98812bd8aa19b3c7a06b5a9add74b702c77fecb29da0e1e3323f1b7da8533 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.384477 4780 scope.go:117] "RemoveContainer" containerID="a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.384848 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4\": container with ID starting with a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4 not found: ID does not exist" 
containerID="a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.385849 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4"} err="failed to get container status \"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4\": rpc error: code = NotFound desc = could not find container \"a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4\": container with ID starting with a917ba7aea5573d364c3eb9343538e3b89d4278ac217c5d147c4cf463feaeda4 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.385929 4780 scope.go:117] "RemoveContainer" containerID="b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.386287 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73\": container with ID starting with b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73 not found: ID does not exist" containerID="b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.386314 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73"} err="failed to get container status \"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73\": rpc error: code = NotFound desc = could not find container \"b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73\": container with ID starting with b897878526d756bf8ee5e9d9d9087806e1be67ee25ed068081deb93c7e597f73 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.386331 4780 scope.go:117] "RemoveContainer" containerID="1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.389197 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5b2tw"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.392737 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.396736 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljcn2"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.402839 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.411147 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-87cmc"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.412107 4780 scope.go:117] "RemoveContainer" containerID="951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.414086 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.417274 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgdk4"] Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 
06:51:36.426468 4780 scope.go:117] "RemoveContainer" containerID="4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.439293 4780 scope.go:117] "RemoveContainer" containerID="1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.439649 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98\": container with ID starting with 1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98 not found: ID does not exist" containerID="1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.439743 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98"} err="failed to get container status \"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98\": rpc error: code = NotFound desc = could not find container \"1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98\": container with ID starting with 1f334dc21fe659b5f332fe3ac1926a0bd1e3dc2b570b83acfb08a927c2090d98 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.439865 4780 scope.go:117] "RemoveContainer" containerID="951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.440263 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea\": container with ID starting with 951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea not found: ID does not exist" containerID="951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.440303 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea"} err="failed to get container status \"951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea\": rpc error: code = NotFound desc = could not find container \"951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea\": container with ID starting with 951ccb1e0f8eff4d5502fd2dd5ae57da48c22964aa715f1231ab04fb4780dfea not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.440335 4780 scope.go:117] "RemoveContainer" containerID="4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.440693 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff\": container with ID starting with 4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff not found: ID does not exist" containerID="4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.440791 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff"} err="failed to get container status 
\"4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff\": rpc error: code = NotFound desc = could not find container \"4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff\": container with ID starting with 4cee218b8efcfcfd76652920cb34de361e229601dca93c041aa3a403f3949cff not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.440899 4780 scope.go:117] "RemoveContainer" containerID="18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.488704 4780 scope.go:117] "RemoveContainer" containerID="45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.501853 4780 scope.go:117] "RemoveContainer" containerID="1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.518268 4780 scope.go:117] "RemoveContainer" containerID="18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.518612 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d\": container with ID starting with 18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d not found: ID does not exist" containerID="18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.518642 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d"} err="failed to get container status \"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d\": rpc error: code = NotFound desc = could not find container \"18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d\": container with ID starting with 18f9ee3a531c3bbdc66214ca77616cae340e12c3ee36c8ab24cbb5d00462ac7d not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.518663 4780 scope.go:117] "RemoveContainer" containerID="45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.518996 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4\": container with ID starting with 45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4 not found: ID does not exist" containerID="45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.519018 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4"} err="failed to get container status \"45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4\": rpc error: code = NotFound desc = could not find container \"45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4\": container with ID starting with 45e0c50b6b5d8a44b2788515569573cbf83cab12448b810fe6185ec989fd27e4 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.519031 4780 scope.go:117] "RemoveContainer" containerID="1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e" Dec 05 06:51:36 crc 
kubenswrapper[4780]: E1205 06:51:36.519372 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e\": container with ID starting with 1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e not found: ID does not exist" containerID="1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.519430 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e"} err="failed to get container status \"1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e\": rpc error: code = NotFound desc = could not find container \"1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e\": container with ID starting with 1243e8e9c2b3e2edccadbb9a724d3b31a57d07be61c63ba03eafac18f0c0ae3e not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.519469 4780 scope.go:117] "RemoveContainer" containerID="ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.535641 4780 scope.go:117] "RemoveContainer" containerID="ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.536184 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35\": container with ID starting with ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35 not found: ID does not exist" containerID="ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.536228 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35"} err="failed to get container status \"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35\": rpc error: code = NotFound desc = could not find container \"ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35\": container with ID starting with ca8b485a8984a6cad00413c428cf7782bec9d4376210eb448a495632d5dfcf35 not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.536260 4780 scope.go:117] "RemoveContainer" containerID="16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.548784 4780 scope.go:117] "RemoveContainer" containerID="9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.561216 4780 scope.go:117] "RemoveContainer" containerID="aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.576783 4780 scope.go:117] "RemoveContainer" containerID="16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.577279 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa\": container with ID starting with 16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa not found: ID does not 
exist" containerID="16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.577305 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa"} err="failed to get container status \"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa\": rpc error: code = NotFound desc = could not find container \"16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa\": container with ID starting with 16f14bf9ef03fb87641e93c9f5e202cdaa3fd97c098cb6f8e9c12ab16361b2aa not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.577325 4780 scope.go:117] "RemoveContainer" containerID="9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.577584 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b\": container with ID starting with 9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b not found: ID does not exist" containerID="9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.577602 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b"} err="failed to get container status \"9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b\": rpc error: code = NotFound desc = could not find container \"9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b\": container with ID starting with 9919171bc14d5e99ab4c11c9ea20b23c91d26c932ba4f7835f67a2658af41a0b not found: ID does not exist" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.577615 4780 scope.go:117] "RemoveContainer" containerID="aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0" Dec 05 06:51:36 crc kubenswrapper[4780]: E1205 06:51:36.577823 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0\": container with ID starting with aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0 not found: ID does not exist" containerID="aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0" Dec 05 06:51:36 crc kubenswrapper[4780]: I1205 06:51:36.577924 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0"} err="failed to get container status \"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0\": rpc error: code = NotFound desc = could not find container \"aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0\": container with ID starting with aa7c895e019ac5bfb68bf4ca62b2f56e2d7ff96108504a91081c1f6002dc84c0 not found: ID does not exist" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.350190 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" event={"ID":"a3732e56-d979-4da4-88e7-bf3e0aa77daf","Type":"ContainerStarted","Data":"142783af3ef157bb6735dd2755f6baae43b3f65657c16a2b2e79667b67a2f1bf"} Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.350689 
Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.350689 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.353314 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.402853 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lptmc" podStartSLOduration=2.402834458 podStartE2EDuration="2.402834458s" podCreationTimestamp="2025-12-05 06:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:51:37.376147302 +0000 UTC m=+331.445663654" watchObservedRunningTime="2025-12-05 06:51:37.402834458 +0000 UTC m=+331.472350790" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.513698 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hztnh"] Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.513898 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.513913 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.513942 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.513950 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.513963 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.513971 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.513981 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.513988 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.513999 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514006 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514019 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514028 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205
06:51:37.514036 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514044 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514055 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514061 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514072 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514080 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514088 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514095 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="extract-utilities" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514104 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514110 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514122 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514130 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: E1205 06:51:37.514140 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514148 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="extract-content" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514249 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514269 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514280 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" containerName="registry-server" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514289 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" containerName="registry-server" 
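
The cpu_manager/state_mem "RemoveStaleState" / "Deleted CPUSet assignment" burst above (with the matching memory_manager lines around it) fires when the new redhat-marketplace-hztnh pod is admitted: before computing fresh assignments, the resource managers drop checkpointed per-container state belonging to pods that no longer exist, such as the catalog pods torn down a second earlier. A toy version of that reconciliation, assuming the checkpoint is modeled as a nested map keyed by podUID and containerName (the kubelet keeps the real thing in a state checkpoint file):

```go
package crisketch

import "fmt"

// pruneStaleState removes checkpointed per-container assignments whose
// pod UID is no longer in the active set, mirroring the
// "RemoveStaleState: removing container" lines above.
func pruneStaleState(state map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range state {
		if active[podUID] {
			continue
		}
		for containerName := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, containerName)
		}
		delete(state, podUID) // drop the pod's CPUSet/memory assignments
	}
}
```
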
Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.514300 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" containerName="marketplace-operator" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.515091 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.519200 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.523000 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztnh"] Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.647929 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-utilities\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.648011 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626d8\" (UniqueName: \"kubernetes.io/projected/8973937a-3238-47fa-b653-5f2e1cf63d9c-kube-api-access-626d8\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.648106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-catalog-content\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.712890 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.713840 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.715592 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.726866 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.749452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-utilities\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.749515 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626d8\" (UniqueName: \"kubernetes.io/projected/8973937a-3238-47fa-b653-5f2e1cf63d9c-kube-api-access-626d8\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.749563 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-catalog-content\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.750210 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-catalog-content\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.760722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8973937a-3238-47fa-b653-5f2e1cf63d9c-utilities\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.776175 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626d8\" (UniqueName: \"kubernetes.io/projected/8973937a-3238-47fa-b653-5f2e1cf63d9c-kube-api-access-626d8\") pod \"redhat-marketplace-hztnh\" (UID: \"8973937a-3238-47fa-b653-5f2e1cf63d9c\") " pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.834580 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.850448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.850815 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kv7\" (UniqueName: \"kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.850850 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.951968 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8kv7\" (UniqueName: \"kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.952026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.952061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.952528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.953020 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content\") pod \"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:37 crc kubenswrapper[4780]: I1205 06:51:37.975230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8kv7\" (UniqueName: \"kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7\") pod 
\"certified-operators-hhspt\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.031508 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.152087 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5ca6f7-6820-4010-966e-05e4cf49ba03" path="/var/lib/kubelet/pods/3b5ca6f7-6820-4010-966e-05e4cf49ba03/volumes" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.152609 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623f84ec-99d6-44fc-8633-bf158d5b8dda" path="/var/lib/kubelet/pods/623f84ec-99d6-44fc-8633-bf158d5b8dda/volumes" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.153165 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dccd32c-dbd1-45fb-8743-8ebd508423ad" path="/var/lib/kubelet/pods/7dccd32c-dbd1-45fb-8743-8ebd508423ad/volumes" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.154129 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b07063-6822-4f6a-ab0c-d6951daae0c3" path="/var/lib/kubelet/pods/a9b07063-6822-4f6a-ab0c-d6951daae0c3/volumes" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.154659 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad87a211-56cb-40ed-8d89-33f1900987d1" path="/var/lib/kubelet/pods/ad87a211-56cb-40ed-8d89-33f1900987d1/volumes" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.225217 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztnh"] Dec 05 06:51:38 crc kubenswrapper[4780]: W1205 06:51:38.235159 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8973937a_3238_47fa_b653_5f2e1cf63d9c.slice/crio-8985075a9bab5e60dead6149e1c1e24e404053ed5025ab7cd6c308bb0fca845f WatchSource:0}: Error finding container 8985075a9bab5e60dead6149e1c1e24e404053ed5025ab7cd6c308bb0fca845f: Status 404 returned error can't find the container with id 8985075a9bab5e60dead6149e1c1e24e404053ed5025ab7cd6c308bb0fca845f Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.358772 4780 generic.go:334] "Generic (PLEG): container finished" podID="8973937a-3238-47fa-b653-5f2e1cf63d9c" containerID="38a57941992de7c59b07032dd981e19ffe13d8bd0349ded99958df6bf46ad0f9" exitCode=0 Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.358832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztnh" event={"ID":"8973937a-3238-47fa-b653-5f2e1cf63d9c","Type":"ContainerDied","Data":"38a57941992de7c59b07032dd981e19ffe13d8bd0349ded99958df6bf46ad0f9"} Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.358903 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztnh" event={"ID":"8973937a-3238-47fa-b653-5f2e1cf63d9c","Type":"ContainerStarted","Data":"8985075a9bab5e60dead6149e1c1e24e404053ed5025ab7cd6c308bb0fca845f"} Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.396256 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 06:51:38 crc kubenswrapper[4780]: W1205 06:51:38.411250 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e0dac3_e166_4bd6_88c1_af5d7ffe7f8c.slice/crio-5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550 WatchSource:0}: Error finding container 5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550: Status 404 returned error can't find the container with id 5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550 Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.530984 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-z5j52" Dec 05 06:51:38 crc kubenswrapper[4780]: I1205 06:51:38.611623 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.366259 4780 generic.go:334] "Generic (PLEG): container finished" podID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerID="55e3b6ed73686189e267b8b14e661a753be25aaded732c7d28721ac888afe4f1" exitCode=0 Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.366502 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerDied","Data":"55e3b6ed73686189e267b8b14e661a753be25aaded732c7d28721ac888afe4f1"} Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.366618 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerStarted","Data":"5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550"} Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.367912 4780 generic.go:334] "Generic (PLEG): container finished" podID="8973937a-3238-47fa-b653-5f2e1cf63d9c" containerID="0f1989f67e54dae014d9e97a3e197c3608da48d5b79b2eb58214f41d42570acf" exitCode=0 Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.368151 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztnh" event={"ID":"8973937a-3238-47fa-b653-5f2e1cf63d9c","Type":"ContainerDied","Data":"0f1989f67e54dae014d9e97a3e197c3608da48d5b79b2eb58214f41d42570acf"} Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.911415 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w9j9w"] Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.912940 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.914643 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.927872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9j9w"] Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.977118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqpfw\" (UniqueName: \"kubernetes.io/projected/4dce830d-a940-4f68-95fa-922479207512-kube-api-access-vqpfw\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.977159 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-utilities\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:39 crc kubenswrapper[4780]: I1205 06:51:39.977196 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-catalog-content\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.078077 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqpfw\" (UniqueName: \"kubernetes.io/projected/4dce830d-a940-4f68-95fa-922479207512-kube-api-access-vqpfw\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.078427 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-utilities\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.078484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-catalog-content\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.078920 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-utilities\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.078966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dce830d-a940-4f68-95fa-922479207512-catalog-content\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " 
pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.102072 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqpfw\" (UniqueName: \"kubernetes.io/projected/4dce830d-a940-4f68-95fa-922479207512-kube-api-access-vqpfw\") pod \"redhat-operators-w9j9w\" (UID: \"4dce830d-a940-4f68-95fa-922479207512\") " pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.114488 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.115719 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.118544 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.129579 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.179401 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvpt\" (UniqueName: \"kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.179463 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.179629 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.227613 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.280936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flvpt\" (UniqueName: \"kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.280996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.281037 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.281498 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.281983 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.297298 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flvpt\" (UniqueName: \"kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt\") pod \"community-operators-5hp5n\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.375009 4780 generic.go:334] "Generic (PLEG): container finished" podID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerID="35546a18a6283e9d7b971f3f39c628198a51df9404c5f48a6c7b1780cbee965e" exitCode=0 Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.375098 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerDied","Data":"35546a18a6283e9d7b971f3f39c628198a51df9404c5f48a6c7b1780cbee965e"} Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.380905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztnh" event={"ID":"8973937a-3238-47fa-b653-5f2e1cf63d9c","Type":"ContainerStarted","Data":"a8506192352224cb16bd60fe363e3c8b74d57288d826b4a7b8a56c82324051f4"} Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.408727 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hztnh" podStartSLOduration=2.022358585 
podStartE2EDuration="3.408711351s" podCreationTimestamp="2025-12-05 06:51:37 +0000 UTC" firstStartedPulling="2025-12-05 06:51:38.360137564 +0000 UTC m=+332.429653896" lastFinishedPulling="2025-12-05 06:51:39.74649033 +0000 UTC m=+333.816006662" observedRunningTime="2025-12-05 06:51:40.404577294 +0000 UTC m=+334.474093626" watchObservedRunningTime="2025-12-05 06:51:40.408711351 +0000 UTC m=+334.478227673" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.450560 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.643945 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9j9w"] Dec 05 06:51:40 crc kubenswrapper[4780]: W1205 06:51:40.645812 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dce830d_a940_4f68_95fa_922479207512.slice/crio-8684529d728412ec1267bb04e1b0e14eae07aa07853d4519bc7ec2bbed9b5bc5 WatchSource:0}: Error finding container 8684529d728412ec1267bb04e1b0e14eae07aa07853d4519bc7ec2bbed9b5bc5: Status 404 returned error can't find the container with id 8684529d728412ec1267bb04e1b0e14eae07aa07853d4519bc7ec2bbed9b5bc5 Dec 05 06:51:40 crc kubenswrapper[4780]: I1205 06:51:40.833091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 06:51:40 crc kubenswrapper[4780]: W1205 06:51:40.864311 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20376915_18d2_4c02_bfdd_eede7902927c.slice/crio-0f7af5806a518a47cec9eba7e6aa4552445c81e13e066fc0ab756f49a577ebf7 WatchSource:0}: Error finding container 0f7af5806a518a47cec9eba7e6aa4552445c81e13e066fc0ab756f49a577ebf7: Status 404 returned error can't find the container with id 0f7af5806a518a47cec9eba7e6aa4552445c81e13e066fc0ab756f49a577ebf7 Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.387789 4780 generic.go:334] "Generic (PLEG): container finished" podID="4dce830d-a940-4f68-95fa-922479207512" containerID="3e98a6adaa07d6cf044398b0125cc1764451911bae02e0e4af09437139975a84" exitCode=0 Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.388180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9j9w" event={"ID":"4dce830d-a940-4f68-95fa-922479207512","Type":"ContainerDied","Data":"3e98a6adaa07d6cf044398b0125cc1764451911bae02e0e4af09437139975a84"} Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.388211 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9j9w" event={"ID":"4dce830d-a940-4f68-95fa-922479207512","Type":"ContainerStarted","Data":"8684529d728412ec1267bb04e1b0e14eae07aa07853d4519bc7ec2bbed9b5bc5"} Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.393740 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerStarted","Data":"dcc2d6afaa59afd24286d4206eb3b22406caed42971ee36a35927f9226af4abd"} Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.395974 4780 generic.go:334] "Generic (PLEG): container finished" podID="20376915-18d2-4c02-bfdd-eede7902927c" containerID="d7e747f8e0fa6e7e02b0852fa78237bf71168923d01fbc885d009f45c4f4638f" exitCode=0 Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 
Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.396022 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerDied","Data":"d7e747f8e0fa6e7e02b0852fa78237bf71168923d01fbc885d009f45c4f4638f"} Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.396053 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerStarted","Data":"0f7af5806a518a47cec9eba7e6aa4552445c81e13e066fc0ab756f49a577ebf7"} Dec 05 06:51:41 crc kubenswrapper[4780]: I1205 06:51:41.438015 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hhspt" podStartSLOduration=2.913608812 podStartE2EDuration="4.437997726s" podCreationTimestamp="2025-12-05 06:51:37 +0000 UTC" firstStartedPulling="2025-12-05 06:51:39.369118364 +0000 UTC m=+333.438634696" lastFinishedPulling="2025-12-05 06:51:40.893507278 +0000 UTC m=+334.963023610" observedRunningTime="2025-12-05 06:51:41.435657359 +0000 UTC m=+335.505173691" watchObservedRunningTime="2025-12-05 06:51:41.437997726 +0000 UTC m=+335.507514058" Dec 05 06:51:42 crc kubenswrapper[4780]: I1205 06:51:42.400981 4780 generic.go:334] "Generic (PLEG): container finished" podID="20376915-18d2-4c02-bfdd-eede7902927c" containerID="ba3d89dbcf600d1ba13f6667188da3de22dc3bd262e546fa114d7ee57b5b6ebe" exitCode=0 Dec 05 06:51:42 crc kubenswrapper[4780]: I1205 06:51:42.401069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerDied","Data":"ba3d89dbcf600d1ba13f6667188da3de22dc3bd262e546fa114d7ee57b5b6ebe"} Dec 05 06:51:42 crc kubenswrapper[4780]: I1205 06:51:42.403287 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9j9w" event={"ID":"4dce830d-a940-4f68-95fa-922479207512","Type":"ContainerStarted","Data":"0073f74845b3ce86332b5550194ace46b925c878fa5d07275c206eeb5a38deb4"} Dec 05 06:51:43 crc kubenswrapper[4780]: I1205 06:51:43.413816 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerStarted","Data":"6e29c2b9fac1ed7c781190d522486cd9067231353c9f62fd652508a251250a37"} Dec 05 06:51:43 crc kubenswrapper[4780]: I1205 06:51:43.415868 4780 generic.go:334] "Generic (PLEG): container finished" podID="4dce830d-a940-4f68-95fa-922479207512" containerID="0073f74845b3ce86332b5550194ace46b925c878fa5d07275c206eeb5a38deb4" exitCode=0 Dec 05 06:51:43 crc kubenswrapper[4780]: I1205 06:51:43.415926 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9j9w" event={"ID":"4dce830d-a940-4f68-95fa-922479207512","Type":"ContainerDied","Data":"0073f74845b3ce86332b5550194ace46b925c878fa5d07275c206eeb5a38deb4"} Dec 05 06:51:43 crc kubenswrapper[4780]: I1205 06:51:43.431626 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5hp5n" podStartSLOduration=2.013497912 podStartE2EDuration="3.431609887s" podCreationTimestamp="2025-12-05 06:51:40 +0000 UTC" firstStartedPulling="2025-12-05 06:51:41.397140388 +0000 UTC m=+335.466656720" lastFinishedPulling="2025-12-05 06:51:42.815252363 +0000 UTC m=+336.884768695" observedRunningTime="2025-12-05 06:51:43.4310316 +0000 UTC
m=+337.500547932" watchObservedRunningTime="2025-12-05 06:51:43.431609887 +0000 UTC m=+337.501126219" Dec 05 06:51:45 crc kubenswrapper[4780]: I1205 06:51:45.434522 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9j9w" event={"ID":"4dce830d-a940-4f68-95fa-922479207512","Type":"ContainerStarted","Data":"971e960564a3f71146288710dde0b2d525bb855c05227570e93b0631737852cd"} Dec 05 06:51:45 crc kubenswrapper[4780]: I1205 06:51:45.450022 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w9j9w" podStartSLOduration=3.988836158 podStartE2EDuration="6.450001598s" podCreationTimestamp="2025-12-05 06:51:39 +0000 UTC" firstStartedPulling="2025-12-05 06:51:41.389895263 +0000 UTC m=+335.459411605" lastFinishedPulling="2025-12-05 06:51:43.851060713 +0000 UTC m=+337.920577045" observedRunningTime="2025-12-05 06:51:45.449410932 +0000 UTC m=+339.518927264" watchObservedRunningTime="2025-12-05 06:51:45.450001598 +0000 UTC m=+339.519517930" Dec 05 06:51:47 crc kubenswrapper[4780]: I1205 06:51:47.842763 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:47 crc kubenswrapper[4780]: I1205 06:51:47.843139 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:47 crc kubenswrapper[4780]: I1205 06:51:47.880137 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:48 crc kubenswrapper[4780]: I1205 06:51:48.032224 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:48 crc kubenswrapper[4780]: I1205 06:51:48.032284 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:48 crc kubenswrapper[4780]: I1205 06:51:48.071080 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:48 crc kubenswrapper[4780]: I1205 06:51:48.482356 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hztnh" Dec 05 06:51:48 crc kubenswrapper[4780]: I1205 06:51:48.486142 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.229072 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.230483 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.278809 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.451976 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.452287 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:50 crc kubenswrapper[4780]: 
I1205 06:51:50.503221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w9j9w" Dec 05 06:51:50 crc kubenswrapper[4780]: I1205 06:51:50.506095 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:51 crc kubenswrapper[4780]: I1205 06:51:51.501291 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 06:51:59 crc kubenswrapper[4780]: I1205 06:51:59.908216 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:51:59 crc kubenswrapper[4780]: I1205 06:51:59.908494 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:52:03 crc kubenswrapper[4780]: I1205 06:52:03.647949 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" podUID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" containerName="registry" containerID="cri-o://4dfe6baf39024118067fbedb8a169e08b0f0b757289a86fa99050ec557d71912" gracePeriod=30 Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.551265 4780 generic.go:334] "Generic (PLEG): container finished" podID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" containerID="4dfe6baf39024118067fbedb8a169e08b0f0b757289a86fa99050ec557d71912" exitCode=0 Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.551406 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" event={"ID":"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232","Type":"ContainerDied","Data":"4dfe6baf39024118067fbedb8a169e08b0f0b757289a86fa99050ec557d71912"}
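
"Killing container with a grace period" with gracePeriod=30 is the standard two-phase stop: the runtime delivers the stop signal first and escalates to a hard kill only if the container outlives the grace period; here the registry exits cleanly (exitCode=0) about four seconds later, so no escalation is needed. A sketch of that escalation logic, where term and kill are hypothetical hooks standing in for the runtime's SIGTERM and SIGKILL paths:

```go
package crisketch

import (
	"context"
	"time"
)

// gracefulStop asks the container to stop and escalates to a forced
// kill only after the grace period elapses. term returns a channel
// closed when the container has exited.
func gracefulStop(ctx context.Context, term func() <-chan struct{}, kill func(), grace time.Duration) {
	exited := term() // polite stop first (SIGTERM)
	select {
	case <-exited: // clean exit within the grace period, as above
	case <-time.After(grace): // e.g. gracePeriod=30 -> 30s
		kill() // escalate (SIGKILL)
	case <-ctx.Done():
		kill()
	}
}
```
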
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.939525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940227 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940290 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940312 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940331 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940359 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7drjz\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940401 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.940417 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets\") pod \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\" (UID: \"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232\") " Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.941096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.941736 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.946097 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz" (OuterVolumeSpecName: "kube-api-access-7drjz") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "kube-api-access-7drjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.958415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.958602 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.958726 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.963913 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:52:07 crc kubenswrapper[4780]: I1205 06:52:07.966647 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" (UID: "218dfcf2-9d52-4b7a-a8e1-df6ccf40e232"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.042628 4780 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.042919 4780 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.043002 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.043086 4780 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.043181 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7drjz\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-kube-api-access-7drjz\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.043266 4780 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.043341 4780 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.558851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" event={"ID":"218dfcf2-9d52-4b7a-a8e1-df6ccf40e232","Type":"ContainerDied","Data":"88acab8b656b59881ba1a6db2fa80801b675bdf3fd62f1f22a6610e1e95ad61a"} Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.558953 4780 scope.go:117] "RemoveContainer" containerID="4dfe6baf39024118067fbedb8a169e08b0f0b757289a86fa99050ec557d71912" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.559001 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hcl4v" Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.586325 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:52:08 crc kubenswrapper[4780]: I1205 06:52:08.594905 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hcl4v"] Dec 05 06:52:10 crc kubenswrapper[4780]: I1205 06:52:10.145598 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" path="/var/lib/kubelet/pods/218dfcf2-9d52-4b7a-a8e1-df6ccf40e232/volumes" Dec 05 06:52:29 crc kubenswrapper[4780]: I1205 06:52:29.907499 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:52:29 crc kubenswrapper[4780]: I1205 06:52:29.908057 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:52:59 crc kubenswrapper[4780]: I1205 06:52:59.908467 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:52:59 crc kubenswrapper[4780]: I1205 06:52:59.909185 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:52:59 crc kubenswrapper[4780]: I1205 06:52:59.909231 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:52:59 crc kubenswrapper[4780]: I1205 06:52:59.909704 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 06:52:59 crc kubenswrapper[4780]: I1205 06:52:59.909748 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a" gracePeriod=600 Dec 05 06:53:00 crc kubenswrapper[4780]: I1205 06:53:00.874926 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a" exitCode=0 Dec 05 06:53:00 crc kubenswrapper[4780]: I1205 06:53:00.875045 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a"} Dec 05 06:53:00 crc kubenswrapper[4780]: I1205 06:53:00.875501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a"} Dec 05 06:53:00 crc kubenswrapper[4780]: I1205 06:53:00.875526 4780 scope.go:117] "RemoveContainer" containerID="64fcfc2108e79a3e9d4ad38e5cd3ed464b272eba7768cb8e2342dc721761e912" Dec 05 06:55:29 crc kubenswrapper[4780]: I1205 06:55:29.908483 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:55:29 crc kubenswrapper[4780]: I1205 06:55:29.909092 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:55:59 crc kubenswrapper[4780]: I1205 06:55:59.908529 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:55:59 crc kubenswrapper[4780]: I1205 06:55:59.909140 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:56:29 crc kubenswrapper[4780]: I1205 06:56:29.908019 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:56:29 crc kubenswrapper[4780]: I1205 06:56:29.908548 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:56:29 crc kubenswrapper[4780]: I1205 06:56:29.908600 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:56:29 crc kubenswrapper[4780]: I1205 06:56:29.909117 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a"} 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 06:56:29 crc kubenswrapper[4780]: I1205 06:56:29.909182 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a" gracePeriod=600 Dec 05 06:56:30 crc kubenswrapper[4780]: I1205 06:56:30.244150 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a" exitCode=0 Dec 05 06:56:30 crc kubenswrapper[4780]: I1205 06:56:30.244327 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a"} Dec 05 06:56:30 crc kubenswrapper[4780]: I1205 06:56:30.244765 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f"} Dec 05 06:56:30 crc kubenswrapper[4780]: I1205 06:56:30.244802 4780 scope.go:117] "RemoveContainer" containerID="f2111d7a66441e4a5fb1d4b56d7eb9a5373847eff371b366ca8b672572b6996a" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.447056 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lf5cd"] Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.447989 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-controller" containerID="cri-o://8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448113 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="northd" containerID="cri-o://edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448095 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="nbdb" containerID="cri-o://69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448158 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448178 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" 
containerName="kube-rbac-proxy-node" containerID="cri-o://059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448218 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-acl-logging" containerID="cri-o://66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.448537 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="sbdb" containerID="cri-o://043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.509461 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" containerID="cri-o://fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" gracePeriod=30 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.806612 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/2.log" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.807274 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/1.log" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.807340 4780 generic.go:334] "Generic (PLEG): container finished" podID="74991823-72ec-4b41-bb63-e92307688c30" containerID="cfe468bd75622ef6b9a05d131f22ba9378c87151c68cb9be64e2dca88782ff9a" exitCode=2 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.807426 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerDied","Data":"cfe468bd75622ef6b9a05d131f22ba9378c87151c68cb9be64e2dca88782ff9a"} Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.807493 4780 scope.go:117] "RemoveContainer" containerID="ef75c5deb83c6a3c9fc90cffca01a8b93cef028c02618e73bddf5f757ba4130a" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.808164 4780 scope.go:117] "RemoveContainer" containerID="cfe468bd75622ef6b9a05d131f22ba9378c87151c68cb9be64e2dca88782ff9a" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.809796 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.813309 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-acl-logging/0.log" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814054 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-controller/0.log" Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814656 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" exitCode=0 Dec 05 06:58:18 crc 
kubenswrapper[4780]: I1205 06:58:18.814684 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" exitCode=0 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814696 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" exitCode=143 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814706 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" exitCode=143 Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814743 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} Dec 05 06:58:18 crc kubenswrapper[4780]: I1205 06:58:18.814768 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.296187 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.298633 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-acl-logging/0.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.299152 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-controller/0.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.299548 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.344824 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wdk2p"] Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345034 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-acl-logging" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345047 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-acl-logging" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345056 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345064 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345074 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="nbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345080 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="nbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345086 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345092 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345099 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345105 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345114 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="sbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345120 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="sbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345131 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kubecfg-setup" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345138 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kubecfg-setup" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345149 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345155 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345163 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345169 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345179 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" containerName="registry" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345185 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" containerName="registry" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345193 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="northd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345198 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="northd" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345206 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-node" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345212 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-node" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345297 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-acl-logging" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345308 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="218dfcf2-9d52-4b7a-a8e1-df6ccf40e232" containerName="registry" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345316 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345322 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345329 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="sbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345336 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovn-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345343 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345349 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345355 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="kube-rbac-proxy-node" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345360 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345367 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="northd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345374 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="nbdb" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345467 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345475 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.345483 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345489 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.345573 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerName="ovnkube-controller" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.347147 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418635 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418686 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418774 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418839 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418899 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418922 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418915 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418943 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418986 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419018 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419062 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419099 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419132 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419154 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419184 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrsmk\" (UniqueName: \"kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419282 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419309 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides\") pod \"61c4a70b-17c4-4f09-a541-5161825c4c03\" (UID: \"61c4a70b-17c4-4f09-a541-5161825c4c03\") " Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419479 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-log-socket\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mszc\" (UniqueName: \"kubernetes.io/projected/d7fde17c-17fa-439a-b759-b30e5d4514e9-kube-api-access-2mszc\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419517 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-netns\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419534 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-var-lib-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.418984 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419649 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419678 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419699 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419719 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log" (OuterVolumeSpecName: "node-log") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.419779 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420070 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420132 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420145 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420167 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash" (OuterVolumeSpecName: "host-slash") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket" (OuterVolumeSpecName: "log-socket") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420238 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420246 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420287 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420371 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovn-node-metrics-cert\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420399 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420435 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-systemd-units\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.421781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-node-log\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.421866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-kubelet\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.421927 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-env-overrides\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.421953 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-ovn\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.421987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.420732 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-script-lib\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422194 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-systemd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422213 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-etc-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422228 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-config\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-slash\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422283 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-netd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-bin\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 
06:58:19.422360 4780 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422371 4780 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422380 4780 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422389 4780 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422397 4780 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-node-log\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422405 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422413 4780 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422421 4780 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422430 4780 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422444 4780 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-slash\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422455 4780 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-log-socket\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422515 4780 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422533 4780 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422544 4780 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422555 4780 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422567 4780 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.422578 4780 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61c4a70b-17c4-4f09-a541-5161825c4c03-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.425392 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk" (OuterVolumeSpecName: "kube-api-access-qrsmk") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "kube-api-access-qrsmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.425940 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.434523 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "61c4a70b-17c4-4f09-a541-5161825c4c03" (UID: "61c4a70b-17c4-4f09-a541-5161825c4c03"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523036 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-kubelet\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-env-overrides\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523109 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-ovn\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523125 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523149 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-script-lib\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-systemd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-etc-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-config\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-slash\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 
06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-netd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523263 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-bin\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523278 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523295 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-log-socket\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mszc\" (UniqueName: \"kubernetes.io/projected/d7fde17c-17fa-439a-b759-b30e5d4514e9-kube-api-access-2mszc\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523340 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-var-lib-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-netns\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523372 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovn-node-metrics-cert\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523387 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523407 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-systemd-units\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523428 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-node-log\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523514 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61c4a70b-17c4-4f09-a541-5161825c4c03-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523527 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrsmk\" (UniqueName: \"kubernetes.io/projected/61c4a70b-17c4-4f09-a541-5161825c4c03-kube-api-access-qrsmk\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523538 4780 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61c4a70b-17c4-4f09-a541-5161825c4c03-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-node-log\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.523606 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-kubelet\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.524442 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-env-overrides\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.524477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-ovn\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.524500 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.524990 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-script-lib\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-systemd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-etc-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525425 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovnkube-config\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-slash\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525483 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-netd\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525503 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-cni-bin\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525523 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525546 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-log-socket\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-var-lib-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: 
\"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.525807 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-host-run-netns\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.526244 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-run-openvswitch\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.526337 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d7fde17c-17fa-439a-b759-b30e5d4514e9-systemd-units\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.529375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d7fde17c-17fa-439a-b759-b30e5d4514e9-ovn-node-metrics-cert\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.540355 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mszc\" (UniqueName: \"kubernetes.io/projected/d7fde17c-17fa-439a-b759-b30e5d4514e9-kube-api-access-2mszc\") pod \"ovnkube-node-wdk2p\" (UID: \"d7fde17c-17fa-439a-b759-b30e5d4514e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.662437 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:19 crc kubenswrapper[4780]: W1205 06:58:19.683341 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7fde17c_17fa_439a_b759_b30e5d4514e9.slice/crio-3fa4ce4f913c10702557f5548ad487b278b6a5be32a562753bb1b19b2e038a0e WatchSource:0}: Error finding container 3fa4ce4f913c10702557f5548ad487b278b6a5be32a562753bb1b19b2e038a0e: Status 404 returned error can't find the container with id 3fa4ce4f913c10702557f5548ad487b278b6a5be32a562753bb1b19b2e038a0e Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.822978 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovnkube-controller/3.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.825663 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-acl-logging/0.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.826603 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lf5cd_61c4a70b-17c4-4f09-a541-5161825c4c03/ovn-controller/0.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827095 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" exitCode=0 Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827229 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827185 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827182 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" exitCode=0 Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827306 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" exitCode=0 Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827323 4780 generic.go:334] "Generic (PLEG): container finished" podID="61c4a70b-17c4-4f09-a541-5161825c4c03" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" exitCode=0 Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827368 4780 scope.go:117] "RemoveContainer" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827396 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.827442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lf5cd" event={"ID":"61c4a70b-17c4-4f09-a541-5161825c4c03","Type":"ContainerDied","Data":"5643df53fea9a8073141ce9ccf15df7fd98a6f854d13400afaed824f42b9c215"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.839136 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bwf64_74991823-72ec-4b41-bb63-e92307688c30/kube-multus/2.log" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.839261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bwf64" event={"ID":"74991823-72ec-4b41-bb63-e92307688c30","Type":"ContainerStarted","Data":"93a2e04287091f42a222a16ab1971a7408a0880d99433312bedfc58f7654c545"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.840801 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"3fa4ce4f913c10702557f5548ad487b278b6a5be32a562753bb1b19b2e038a0e"} Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.849099 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.869173 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lf5cd"] Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.874689 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lf5cd"] Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.874827 4780 scope.go:117] "RemoveContainer" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 
06:58:19.902610 4780 scope.go:117] "RemoveContainer" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.916373 4780 scope.go:117] "RemoveContainer" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.927693 4780 scope.go:117] "RemoveContainer" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.942044 4780 scope.go:117] "RemoveContainer" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.957526 4780 scope.go:117] "RemoveContainer" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.969618 4780 scope.go:117] "RemoveContainer" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.983749 4780 scope.go:117] "RemoveContainer" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.995047 4780 scope.go:117] "RemoveContainer" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.995369 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": container with ID starting with fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b not found: ID does not exist" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.995402 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} err="failed to get container status \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": rpc error: code = NotFound desc = could not find container \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": container with ID starting with fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.995425 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.995751 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": container with ID starting with 6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8 not found: ID does not exist" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.995783 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"} err="failed to get container status \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": rpc error: code = NotFound desc = could not find container \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": 
container with ID starting with 6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8 not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.995810 4780 scope.go:117] "RemoveContainer" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.996333 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": container with ID starting with 043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd not found: ID does not exist" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996358 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} err="failed to get container status \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": rpc error: code = NotFound desc = could not find container \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": container with ID starting with 043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996376 4780 scope.go:117] "RemoveContainer" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.996630 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": container with ID starting with 69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4 not found: ID does not exist" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996657 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} err="failed to get container status \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": rpc error: code = NotFound desc = could not find container \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": container with ID starting with 69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4 not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996671 4780 scope.go:117] "RemoveContainer" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.996929 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": container with ID starting with edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87 not found: ID does not exist" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996953 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} err="failed to get container status 
\"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": rpc error: code = NotFound desc = could not find container \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": container with ID starting with edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87 not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.996965 4780 scope.go:117] "RemoveContainer" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.997219 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": container with ID starting with f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9 not found: ID does not exist" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997248 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} err="failed to get container status \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": rpc error: code = NotFound desc = could not find container \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": container with ID starting with f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9 not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997270 4780 scope.go:117] "RemoveContainer" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.997504 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": container with ID starting with 059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec not found: ID does not exist" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997558 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} err="failed to get container status \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": rpc error: code = NotFound desc = could not find container \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": container with ID starting with 059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997585 4780 scope.go:117] "RemoveContainer" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.997946 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": container with ID starting with 66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14 not found: ID does not exist" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997966 4780 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} err="failed to get container status \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": rpc error: code = NotFound desc = could not find container \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": container with ID starting with 66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14 not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.997983 4780 scope.go:117] "RemoveContainer" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.998353 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": container with ID starting with 8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a not found: ID does not exist" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.998374 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} err="failed to get container status \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": rpc error: code = NotFound desc = could not find container \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": container with ID starting with 8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.998393 4780 scope.go:117] "RemoveContainer" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:19 crc kubenswrapper[4780]: E1205 06:58:19.998790 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": container with ID starting with 680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a not found: ID does not exist" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.998830 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a"} err="failed to get container status \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": rpc error: code = NotFound desc = could not find container \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": container with ID starting with 680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a not found: ID does not exist" Dec 05 06:58:19 crc kubenswrapper[4780]: I1205 06:58:19.998857 4780 scope.go:117] "RemoveContainer" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.000643 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} err="failed to get container status \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": rpc error: code = NotFound desc = could not find container 
\"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": container with ID starting with fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.000710 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.000978 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"} err="failed to get container status \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": rpc error: code = NotFound desc = could not find container \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": container with ID starting with 6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.001008 4780 scope.go:117] "RemoveContainer" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.001536 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} err="failed to get container status \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": rpc error: code = NotFound desc = could not find container \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": container with ID starting with 043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.001621 4780 scope.go:117] "RemoveContainer" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.002058 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} err="failed to get container status \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": rpc error: code = NotFound desc = could not find container \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": container with ID starting with 69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.002112 4780 scope.go:117] "RemoveContainer" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.002518 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} err="failed to get container status \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": rpc error: code = NotFound desc = could not find container \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": container with ID starting with edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.002549 4780 scope.go:117] "RemoveContainer" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.003158 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} err="failed to get container status \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": rpc error: code = NotFound desc = could not find container \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": container with ID starting with f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.003182 4780 scope.go:117] "RemoveContainer" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.003600 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} err="failed to get container status \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": rpc error: code = NotFound desc = could not find container \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": container with ID starting with 059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.003624 4780 scope.go:117] "RemoveContainer" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.004231 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} err="failed to get container status \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": rpc error: code = NotFound desc = could not find container \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": container with ID starting with 66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.004257 4780 scope.go:117] "RemoveContainer" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.004809 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} err="failed to get container status \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": rpc error: code = NotFound desc = could not find container \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": container with ID starting with 8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.004841 4780 scope.go:117] "RemoveContainer" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.006627 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a"} err="failed to get container status \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": rpc error: code = NotFound desc = could not find container \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": container with ID starting with 
680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.006661 4780 scope.go:117] "RemoveContainer" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007101 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} err="failed to get container status \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": rpc error: code = NotFound desc = could not find container \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": container with ID starting with fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007121 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007448 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"} err="failed to get container status \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": rpc error: code = NotFound desc = could not find container \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": container with ID starting with 6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007469 4780 scope.go:117] "RemoveContainer" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007764 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} err="failed to get container status \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": rpc error: code = NotFound desc = could not find container \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": container with ID starting with 043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.007785 4780 scope.go:117] "RemoveContainer" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008144 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} err="failed to get container status \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": rpc error: code = NotFound desc = could not find container \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": container with ID starting with 69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008163 4780 scope.go:117] "RemoveContainer" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008417 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} err="failed to get container status \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": rpc error: code = NotFound desc = could not find container \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": container with ID starting with edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008454 4780 scope.go:117] "RemoveContainer" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008830 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} err="failed to get container status \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": rpc error: code = NotFound desc = could not find container \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": container with ID starting with f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.008850 4780 scope.go:117] "RemoveContainer" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009142 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} err="failed to get container status \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": rpc error: code = NotFound desc = could not find container \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": container with ID starting with 059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009169 4780 scope.go:117] "RemoveContainer" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009540 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} err="failed to get container status \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": rpc error: code = NotFound desc = could not find container \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": container with ID starting with 66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009576 4780 scope.go:117] "RemoveContainer" containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009915 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} err="failed to get container status \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": rpc error: code = NotFound desc = could not find container \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": container with ID starting with 8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a not found: ID does not exist" Dec 
05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.009937 4780 scope.go:117] "RemoveContainer" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010168 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a"} err="failed to get container status \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": rpc error: code = NotFound desc = could not find container \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": container with ID starting with 680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010191 4780 scope.go:117] "RemoveContainer" containerID="fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010509 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b"} err="failed to get container status \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": rpc error: code = NotFound desc = could not find container \"fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b\": container with ID starting with fd7108c99a76b3595b4f92159986afa6bf6038382bc27cf4a35e775f3133a04b not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010538 4780 scope.go:117] "RemoveContainer" containerID="6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010758 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8"} err="failed to get container status \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": rpc error: code = NotFound desc = could not find container \"6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8\": container with ID starting with 6d8ce37881df19f22ea2df55c7681d717249c79bcecbb8a2b4d1a7303d2dabd8 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.010780 4780 scope.go:117] "RemoveContainer" containerID="043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011042 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd"} err="failed to get container status \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": rpc error: code = NotFound desc = could not find container \"043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd\": container with ID starting with 043c3de38a1b8f2fba2ce0fda0d479489d2a77e02e1bb753c4d85c753c1245cd not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011088 4780 scope.go:117] "RemoveContainer" containerID="69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011352 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4"} err="failed to get container status 
\"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": rpc error: code = NotFound desc = could not find container \"69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4\": container with ID starting with 69dedc4a86ca55584e356accc102608994d3b64df80a9812adc013afd82d2df4 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011372 4780 scope.go:117] "RemoveContainer" containerID="edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011580 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87"} err="failed to get container status \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": rpc error: code = NotFound desc = could not find container \"edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87\": container with ID starting with edd331fa75d56a329c3df550184d1ad5e4b15df0d7adfc00dc21744707bd6a87 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011601 4780 scope.go:117] "RemoveContainer" containerID="f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011807 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9"} err="failed to get container status \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": rpc error: code = NotFound desc = could not find container \"f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9\": container with ID starting with f0dac5b0a1bf32cf95845c2722b8086bc6cff62e72d7b7bd0e7f6be92fda06c9 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.011826 4780 scope.go:117] "RemoveContainer" containerID="059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012059 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec"} err="failed to get container status \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": rpc error: code = NotFound desc = could not find container \"059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec\": container with ID starting with 059e3a424ae4ce15953be79e8f4438fed1e044352f20cac3097089737eb3c0ec not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012076 4780 scope.go:117] "RemoveContainer" containerID="66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012349 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14"} err="failed to get container status \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": rpc error: code = NotFound desc = could not find container \"66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14\": container with ID starting with 66981825f8c855938be10c5e76bd04ee275fc91084c13ba601dcecdd44a59c14 not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012369 4780 scope.go:117] "RemoveContainer" 
containerID="8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012617 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a"} err="failed to get container status \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": rpc error: code = NotFound desc = could not find container \"8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a\": container with ID starting with 8ec2c70d529e11c07c7ea6bd61159ebce95887c4481fc602f05ced444274081a not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.012652 4780 scope.go:117] "RemoveContainer" containerID="680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.013041 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a"} err="failed to get container status \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": rpc error: code = NotFound desc = could not find container \"680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a\": container with ID starting with 680e68daf840f278cdc00f69cb358a055611dfd604e87dae17b2ad18d769f68a not found: ID does not exist" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.146318 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c4a70b-17c4-4f09-a541-5161825c4c03" path="/var/lib/kubelet/pods/61c4a70b-17c4-4f09-a541-5161825c4c03/volumes" Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.850506 4780 generic.go:334] "Generic (PLEG): container finished" podID="d7fde17c-17fa-439a-b759-b30e5d4514e9" containerID="a88616641c120d4c3f4c5d7280228e0f3dc71c8d646ac1960d42a0d2376374a8" exitCode=0 Dec 05 06:58:20 crc kubenswrapper[4780]: I1205 06:58:20.850846 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerDied","Data":"a88616641c120d4c3f4c5d7280228e0f3dc71c8d646ac1960d42a0d2376374a8"} Dec 05 06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859188 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"29424b5515d0f834f91fa220276c2777e65f225c26cc0e22b6d928fefdf40b2a"} Dec 05 06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"4c5bc3b55fd6c94f01010173499da9ae06de8b893ec131e688af12ea4ea83297"} Dec 05 06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859857 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"1c15e67843ab171bfe639a10d1db21a997e8be08db5c3a88878e34dae9ed5feb"} Dec 05 06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"9ec9b8376054322886efbe7fe4f0abcb3e72a0caa2d935e2d822fdc015a9ceb2"} Dec 05 
06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859930 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"c43ec3e8ef377a658a2dd8f5d9153d46c7e733f0a57175caca77532a84b742b5"} Dec 05 06:58:21 crc kubenswrapper[4780]: I1205 06:58:21.859949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"13d793cd127d875acd73c15e9c1b4d2224c4e05bead054a8dff92552982c5404"} Dec 05 06:58:23 crc kubenswrapper[4780]: I1205 06:58:23.875908 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"a30cf9b4f97a713816dadc71c977036e1376f697a5d762d6b249c06036e4907e"} Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.608781 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-krvz9"] Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.609982 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.612273 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.612394 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.612498 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5555f" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.613755 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.804072 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.804423 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zcn\" (UniqueName: \"kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.804514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.906225 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc 
kubenswrapper[4780]: I1205 06:58:25.906264 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zcn\" (UniqueName: \"kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.906301 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.906534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.907147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.921511 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zcn\" (UniqueName: \"kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn\") pod \"crc-storage-crc-krvz9\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: I1205 06:58:25.928372 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: E1205 06:58:25.947773 4780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(6e1b5a33604c07cd823cb56fb1d4184225fbc809d851f38bab81914b786ac4e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 06:58:25 crc kubenswrapper[4780]: E1205 06:58:25.947835 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(6e1b5a33604c07cd823cb56fb1d4184225fbc809d851f38bab81914b786ac4e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: E1205 06:58:25.947857 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(6e1b5a33604c07cd823cb56fb1d4184225fbc809d851f38bab81914b786ac4e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:25 crc kubenswrapper[4780]: E1205 06:58:25.947924 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-krvz9_crc-storage(8e573324-e383-4147-a05d-57261c0d5645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-krvz9_crc-storage(8e573324-e383-4147-a05d-57261c0d5645)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(6e1b5a33604c07cd823cb56fb1d4184225fbc809d851f38bab81914b786ac4e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-krvz9" podUID="8e573324-e383-4147-a05d-57261c0d5645" Dec 05 06:58:26 crc kubenswrapper[4780]: I1205 06:58:26.900759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" event={"ID":"d7fde17c-17fa-439a-b759-b30e5d4514e9","Type":"ContainerStarted","Data":"8155d90b37537f20b3776e5c2e2eed9505fddaec88515ff2417946fe36677b8e"} Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.558409 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-krvz9"] Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.558523 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.558974 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:27 crc kubenswrapper[4780]: E1205 06:58:27.581224 4780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(588c01ebd1cba26bfb95d81ad4edc068c7a8b19d55da05a4051ab3d0d3551a77): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 06:58:27 crc kubenswrapper[4780]: E1205 06:58:27.581283 4780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(588c01ebd1cba26bfb95d81ad4edc068c7a8b19d55da05a4051ab3d0d3551a77): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:27 crc kubenswrapper[4780]: E1205 06:58:27.581305 4780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(588c01ebd1cba26bfb95d81ad4edc068c7a8b19d55da05a4051ab3d0d3551a77): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:27 crc kubenswrapper[4780]: E1205 06:58:27.581391 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-krvz9_crc-storage(8e573324-e383-4147-a05d-57261c0d5645)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-krvz9_crc-storage(8e573324-e383-4147-a05d-57261c0d5645)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-krvz9_crc-storage_8e573324-e383-4147-a05d-57261c0d5645_0(588c01ebd1cba26bfb95d81ad4edc068c7a8b19d55da05a4051ab3d0d3551a77): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-krvz9" podUID="8e573324-e383-4147-a05d-57261c0d5645" Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.905490 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.905524 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.930626 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:27 crc kubenswrapper[4780]: I1205 06:58:27.936785 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" podStartSLOduration=8.936766141 podStartE2EDuration="8.936766141s" podCreationTimestamp="2025-12-05 06:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:58:27.930972726 +0000 UTC m=+742.000489058" watchObservedRunningTime="2025-12-05 06:58:27.936766141 +0000 UTC m=+742.006282463" Dec 05 06:58:28 crc kubenswrapper[4780]: I1205 06:58:28.910260 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:28 crc kubenswrapper[4780]: I1205 06:58:28.933676 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:32 crc kubenswrapper[4780]: I1205 06:58:32.786469 4780 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 05 06:58:42 crc kubenswrapper[4780]: I1205 06:58:42.138627 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:42 crc kubenswrapper[4780]: I1205 06:58:42.139573 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:42 crc kubenswrapper[4780]: I1205 06:58:42.347803 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-krvz9"] Dec 05 06:58:42 crc kubenswrapper[4780]: W1205 06:58:42.353305 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e573324_e383_4147_a05d_57261c0d5645.slice/crio-b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af WatchSource:0}: Error finding container b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af: Status 404 returned error can't find the container with id b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af Dec 05 06:58:42 crc kubenswrapper[4780]: I1205 06:58:42.355474 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 06:58:42 crc kubenswrapper[4780]: I1205 06:58:42.976391 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-krvz9" event={"ID":"8e573324-e383-4147-a05d-57261c0d5645","Type":"ContainerStarted","Data":"b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af"} Dec 05 06:58:44 crc kubenswrapper[4780]: I1205 06:58:44.988904 4780 generic.go:334] "Generic (PLEG): container finished" podID="8e573324-e383-4147-a05d-57261c0d5645" containerID="7b19da8777ffdc66d66308af0b7e15972540dc8726fdfe9e7aa30fe9b8d504fd" exitCode=0 Dec 05 06:58:44 crc kubenswrapper[4780]: I1205 06:58:44.988949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-krvz9" event={"ID":"8e573324-e383-4147-a05d-57261c0d5645","Type":"ContainerDied","Data":"7b19da8777ffdc66d66308af0b7e15972540dc8726fdfe9e7aa30fe9b8d504fd"} Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.227494 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.405853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage\") pod \"8e573324-e383-4147-a05d-57261c0d5645\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.405936 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7zcn\" (UniqueName: \"kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn\") pod \"8e573324-e383-4147-a05d-57261c0d5645\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.406008 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt\") pod \"8e573324-e383-4147-a05d-57261c0d5645\" (UID: \"8e573324-e383-4147-a05d-57261c0d5645\") " Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.406289 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "8e573324-e383-4147-a05d-57261c0d5645" (UID: "8e573324-e383-4147-a05d-57261c0d5645"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.410379 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn" (OuterVolumeSpecName: "kube-api-access-w7zcn") pod "8e573324-e383-4147-a05d-57261c0d5645" (UID: "8e573324-e383-4147-a05d-57261c0d5645"). InnerVolumeSpecName "kube-api-access-w7zcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.424496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "8e573324-e383-4147-a05d-57261c0d5645" (UID: "8e573324-e383-4147-a05d-57261c0d5645"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.507602 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8e573324-e383-4147-a05d-57261c0d5645-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.507639 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7zcn\" (UniqueName: \"kubernetes.io/projected/8e573324-e383-4147-a05d-57261c0d5645-kube-api-access-w7zcn\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:46 crc kubenswrapper[4780]: I1205 06:58:46.507651 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8e573324-e383-4147-a05d-57261c0d5645-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 05 06:58:47 crc kubenswrapper[4780]: I1205 06:58:47.000639 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-krvz9" event={"ID":"8e573324-e383-4147-a05d-57261c0d5645","Type":"ContainerDied","Data":"b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af"} Dec 05 06:58:47 crc kubenswrapper[4780]: I1205 06:58:47.000966 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b374facaf51485b21fc3f744bb21a37df73bc2fe4e226949e3ce60a195dc00af" Dec 05 06:58:47 crc kubenswrapper[4780]: I1205 06:58:47.000699 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-krvz9" Dec 05 06:58:49 crc kubenswrapper[4780]: I1205 06:58:49.688416 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wdk2p" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.317441 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7"] Dec 05 06:58:54 crc kubenswrapper[4780]: E1205 06:58:54.318253 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e573324-e383-4147-a05d-57261c0d5645" containerName="storage" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.318269 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e573324-e383-4147-a05d-57261c0d5645" containerName="storage" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.318398 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e573324-e383-4147-a05d-57261c0d5645" containerName="storage" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.319103 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.321291 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.334686 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7"] Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.501937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.502007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56zl\" (UniqueName: \"kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.502057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.603467 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.603565 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56zl\" (UniqueName: \"kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.603623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.604058 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.604109 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.622984 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56zl\" (UniqueName: \"kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:54 crc kubenswrapper[4780]: I1205 06:58:54.645839 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:58:55 crc kubenswrapper[4780]: I1205 06:58:55.075066 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7"] Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.039774 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerID="3429ec46bf1294a4dda4b94da5192f2d5714d7e80a9e7fd7472098624ea81155" exitCode=0 Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.039831 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" event={"ID":"bb460ef0-02da-47c6-81c5-4c6ccc81e705","Type":"ContainerDied","Data":"3429ec46bf1294a4dda4b94da5192f2d5714d7e80a9e7fd7472098624ea81155"} Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.040107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" event={"ID":"bb460ef0-02da-47c6-81c5-4c6ccc81e705","Type":"ContainerStarted","Data":"99c57f228b073907973f169ebc22dfbe1a9c7bd0151c9adcf33ac6daf581e19d"} Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.674415 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.682176 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.706845 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.831519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.831600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vms\" (UniqueName: \"kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.831642 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.932821 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.932890 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vms\" (UniqueName: \"kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.932918 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.933290 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.933295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:56 crc kubenswrapper[4780]: I1205 06:58:56.950252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n5vms\" (UniqueName: \"kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms\") pod \"redhat-operators-vk5tv\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:57 crc kubenswrapper[4780]: I1205 06:58:57.006587 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:58:57 crc kubenswrapper[4780]: I1205 06:58:57.196527 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:58:58 crc kubenswrapper[4780]: I1205 06:58:58.052493 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerID="5b0159f2d3d4a60645350e858773f30ffac668d5c38103cd65126236b9e62722" exitCode=0 Dec 05 06:58:58 crc kubenswrapper[4780]: I1205 06:58:58.053040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" event={"ID":"bb460ef0-02da-47c6-81c5-4c6ccc81e705","Type":"ContainerDied","Data":"5b0159f2d3d4a60645350e858773f30ffac668d5c38103cd65126236b9e62722"} Dec 05 06:58:58 crc kubenswrapper[4780]: I1205 06:58:58.057652 4780 generic.go:334] "Generic (PLEG): container finished" podID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerID="481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881" exitCode=0 Dec 05 06:58:58 crc kubenswrapper[4780]: I1205 06:58:58.057699 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerDied","Data":"481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881"} Dec 05 06:58:58 crc kubenswrapper[4780]: I1205 06:58:58.057729 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerStarted","Data":"1ff9c3d5d6275c8b376b8e4adfd18010cdbbee29392e212ed244faf47f194eab"} Dec 05 06:58:59 crc kubenswrapper[4780]: I1205 06:58:59.065511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerStarted","Data":"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce"} Dec 05 06:58:59 crc kubenswrapper[4780]: I1205 06:58:59.067507 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerID="e979d6b4674207c386587962f6ab825e3162a72424ee4912cc8efeef425f58fc" exitCode=0 Dec 05 06:58:59 crc kubenswrapper[4780]: I1205 06:58:59.067545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" event={"ID":"bb460ef0-02da-47c6-81c5-4c6ccc81e705","Type":"ContainerDied","Data":"e979d6b4674207c386587962f6ab825e3162a72424ee4912cc8efeef425f58fc"} Dec 05 06:58:59 crc kubenswrapper[4780]: I1205 06:58:59.908166 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:58:59 crc kubenswrapper[4780]: I1205 06:58:59.908528 4780 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.073718 4780 generic.go:334] "Generic (PLEG): container finished" podID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerID="16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce" exitCode=0 Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.073824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerDied","Data":"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce"} Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.320372 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.474380 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v56zl\" (UniqueName: \"kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl\") pod \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.474433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle\") pod \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.474457 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util\") pod \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\" (UID: \"bb460ef0-02da-47c6-81c5-4c6ccc81e705\") " Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.486458 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle" (OuterVolumeSpecName: "bundle") pod "bb460ef0-02da-47c6-81c5-4c6ccc81e705" (UID: "bb460ef0-02da-47c6-81c5-4c6ccc81e705"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.494628 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl" (OuterVolumeSpecName: "kube-api-access-v56zl") pod "bb460ef0-02da-47c6-81c5-4c6ccc81e705" (UID: "bb460ef0-02da-47c6-81c5-4c6ccc81e705"). InnerVolumeSpecName "kube-api-access-v56zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.585645 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v56zl\" (UniqueName: \"kubernetes.io/projected/bb460ef0-02da-47c6-81c5-4c6ccc81e705-kube-api-access-v56zl\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.585679 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.922495 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util" (OuterVolumeSpecName: "util") pod "bb460ef0-02da-47c6-81c5-4c6ccc81e705" (UID: "bb460ef0-02da-47c6-81c5-4c6ccc81e705"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:00 crc kubenswrapper[4780]: I1205 06:59:00.990638 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb460ef0-02da-47c6-81c5-4c6ccc81e705-util\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:01 crc kubenswrapper[4780]: I1205 06:59:01.082654 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerStarted","Data":"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f"} Dec 05 06:59:01 crc kubenswrapper[4780]: I1205 06:59:01.085068 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" event={"ID":"bb460ef0-02da-47c6-81c5-4c6ccc81e705","Type":"ContainerDied","Data":"99c57f228b073907973f169ebc22dfbe1a9c7bd0151c9adcf33ac6daf581e19d"} Dec 05 06:59:01 crc kubenswrapper[4780]: I1205 06:59:01.085093 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c57f228b073907973f169ebc22dfbe1a9c7bd0151c9adcf33ac6daf581e19d" Dec 05 06:59:01 crc kubenswrapper[4780]: I1205 06:59:01.085176 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7" Dec 05 06:59:02 crc kubenswrapper[4780]: I1205 06:59:02.106446 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vk5tv" podStartSLOduration=3.235769622 podStartE2EDuration="6.106432326s" podCreationTimestamp="2025-12-05 06:58:56 +0000 UTC" firstStartedPulling="2025-12-05 06:58:58.061166541 +0000 UTC m=+772.130682913" lastFinishedPulling="2025-12-05 06:59:00.931829255 +0000 UTC m=+775.001345617" observedRunningTime="2025-12-05 06:59:02.103795995 +0000 UTC m=+776.173312337" watchObservedRunningTime="2025-12-05 06:59:02.106432326 +0000 UTC m=+776.175948658" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.758746 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7"] Dec 05 06:59:04 crc kubenswrapper[4780]: E1205 06:59:04.759331 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="util" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.759344 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="util" Dec 05 06:59:04 crc kubenswrapper[4780]: E1205 06:59:04.759359 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="extract" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.759364 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="extract" Dec 05 06:59:04 crc kubenswrapper[4780]: E1205 06:59:04.759378 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="pull" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.759386 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="pull" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.759473 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb460ef0-02da-47c6-81c5-4c6ccc81e705" containerName="extract" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.759836 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.761732 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.761736 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.762719 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hnt5r" Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.771259 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7"] Dec 05 06:59:04 crc kubenswrapper[4780]: I1205 06:59:04.935647 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5n4\" (UniqueName: \"kubernetes.io/projected/bf858c19-528a-43ce-bd7c-317f7ad93ac7-kube-api-access-6l5n4\") pod \"nmstate-operator-5b5b58f5c8-g2mb7\" (UID: \"bf858c19-528a-43ce-bd7c-317f7ad93ac7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" Dec 05 06:59:05 crc kubenswrapper[4780]: I1205 06:59:05.036711 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5n4\" (UniqueName: \"kubernetes.io/projected/bf858c19-528a-43ce-bd7c-317f7ad93ac7-kube-api-access-6l5n4\") pod \"nmstate-operator-5b5b58f5c8-g2mb7\" (UID: \"bf858c19-528a-43ce-bd7c-317f7ad93ac7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" Dec 05 06:59:05 crc kubenswrapper[4780]: I1205 06:59:05.052846 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5n4\" (UniqueName: \"kubernetes.io/projected/bf858c19-528a-43ce-bd7c-317f7ad93ac7-kube-api-access-6l5n4\") pod \"nmstate-operator-5b5b58f5c8-g2mb7\" (UID: \"bf858c19-528a-43ce-bd7c-317f7ad93ac7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" Dec 05 06:59:05 crc kubenswrapper[4780]: I1205 06:59:05.090011 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" Dec 05 06:59:05 crc kubenswrapper[4780]: I1205 06:59:05.274185 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7"] Dec 05 06:59:06 crc kubenswrapper[4780]: I1205 06:59:06.107969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" event={"ID":"bf858c19-528a-43ce-bd7c-317f7ad93ac7","Type":"ContainerStarted","Data":"bdd2fd2ab1eb1434a48e21e82474ee44ffcc3b303f82628e8c2d37375638252d"} Dec 05 06:59:07 crc kubenswrapper[4780]: I1205 06:59:07.007097 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:07 crc kubenswrapper[4780]: I1205 06:59:07.007499 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:07 crc kubenswrapper[4780]: I1205 06:59:07.054109 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:07 crc kubenswrapper[4780]: I1205 06:59:07.149744 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:08 crc kubenswrapper[4780]: I1205 06:59:08.118223 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" event={"ID":"bf858c19-528a-43ce-bd7c-317f7ad93ac7","Type":"ContainerStarted","Data":"0a31ffb7d995923cf678e602570d06c9f43b01ee0b03ad941c87c41b01ed2fc5"} Dec 05 06:59:08 crc kubenswrapper[4780]: I1205 06:59:08.134986 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-g2mb7" podStartSLOduration=1.7016244230000002 podStartE2EDuration="4.134969539s" podCreationTimestamp="2025-12-05 06:59:04 +0000 UTC" firstStartedPulling="2025-12-05 06:59:05.288432068 +0000 UTC m=+779.357948400" lastFinishedPulling="2025-12-05 06:59:07.721777184 +0000 UTC m=+781.791293516" observedRunningTime="2025-12-05 06:59:08.131769874 +0000 UTC m=+782.201286206" watchObservedRunningTime="2025-12-05 06:59:08.134969539 +0000 UTC m=+782.204485871" Dec 05 06:59:09 crc kubenswrapper[4780]: I1205 06:59:09.656597 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:59:10 crc kubenswrapper[4780]: I1205 06:59:10.127366 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vk5tv" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="registry-server" containerID="cri-o://4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f" gracePeriod=2 Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.746278 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.925834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5vms\" (UniqueName: \"kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms\") pod \"674412c4-2c0c-4c39-b070-8e16e43a0e16\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.926039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content\") pod \"674412c4-2c0c-4c39-b070-8e16e43a0e16\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.926062 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities\") pod \"674412c4-2c0c-4c39-b070-8e16e43a0e16\" (UID: \"674412c4-2c0c-4c39-b070-8e16e43a0e16\") " Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.927145 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities" (OuterVolumeSpecName: "utilities") pod "674412c4-2c0c-4c39-b070-8e16e43a0e16" (UID: "674412c4-2c0c-4c39-b070-8e16e43a0e16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:11 crc kubenswrapper[4780]: I1205 06:59:11.932597 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms" (OuterVolumeSpecName: "kube-api-access-n5vms") pod "674412c4-2c0c-4c39-b070-8e16e43a0e16" (UID: "674412c4-2c0c-4c39-b070-8e16e43a0e16"). InnerVolumeSpecName "kube-api-access-n5vms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.026802 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.026832 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5vms\" (UniqueName: \"kubernetes.io/projected/674412c4-2c0c-4c39-b070-8e16e43a0e16-kube-api-access-n5vms\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.048447 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "674412c4-2c0c-4c39-b070-8e16e43a0e16" (UID: "674412c4-2c0c-4c39-b070-8e16e43a0e16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.128147 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/674412c4-2c0c-4c39-b070-8e16e43a0e16-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.140569 4780 generic.go:334] "Generic (PLEG): container finished" podID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerID="4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f" exitCode=0 Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.140659 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vk5tv" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.145037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerDied","Data":"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f"} Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.145081 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vk5tv" event={"ID":"674412c4-2c0c-4c39-b070-8e16e43a0e16","Type":"ContainerDied","Data":"1ff9c3d5d6275c8b376b8e4adfd18010cdbbee29392e212ed244faf47f194eab"} Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.145105 4780 scope.go:117] "RemoveContainer" containerID="4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.167361 4780 scope.go:117] "RemoveContainer" containerID="16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.174330 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.185056 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vk5tv"] Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.187081 4780 scope.go:117] "RemoveContainer" containerID="481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.201521 4780 scope.go:117] "RemoveContainer" containerID="4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f" Dec 05 06:59:12 crc kubenswrapper[4780]: E1205 06:59:12.201921 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f\": container with ID starting with 4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f not found: ID does not exist" containerID="4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f" Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.201971 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f"} err="failed to get container status \"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f\": rpc error: code = NotFound desc = could not find container \"4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f\": container with ID starting with 4bb408c0b34b5b05ba39d90054089db6947c5fb1b04ef7826dacc19b5bfa943f not found: ID does not exist" Dec 05 06:59:12 crc 
Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.202005 4780 scope.go:117] "RemoveContainer" containerID="16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce"
Dec 05 06:59:12 crc kubenswrapper[4780]: E1205 06:59:12.202484 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce\": container with ID starting with 16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce not found: ID does not exist" containerID="16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce"
Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.202590 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce"} err="failed to get container status \"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce\": rpc error: code = NotFound desc = could not find container \"16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce\": container with ID starting with 16be29c19f967979105b4139508fba618cca8b10a407a6cf6d3aebc8eb6dabce not found: ID does not exist"
Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.202683 4780 scope.go:117] "RemoveContainer" containerID="481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881"
Dec 05 06:59:12 crc kubenswrapper[4780]: E1205 06:59:12.203070 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881\": container with ID starting with 481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881 not found: ID does not exist" containerID="481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881"
Dec 05 06:59:12 crc kubenswrapper[4780]: I1205 06:59:12.203135 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881"} err="failed to get container status \"481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881\": rpc error: code = NotFound desc = could not find container \"481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881\": container with ID starting with 481e314a927c8e86c04b51839d5776c5eaf6bdebfdd4e5776f98b36815405881 not found: ID does not exist"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.145818 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" path="/var/lib/kubelet/pods/674412c4-2c0c-4c39-b070-8e16e43a0e16/volumes"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.444708 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk"]
Dec 05 06:59:14 crc kubenswrapper[4780]: E1205 06:59:14.445291 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="extract-content"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.445308 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="extract-content"
Dec 05 06:59:14 crc kubenswrapper[4780]: E1205 06:59:14.445326 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="extract-utilities"
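[Editor's note] The NotFound errors above are benign: the containers were already removed, so a repeated RemoveContainer has nothing left to do, and delete paths conventionally treat NotFound as success (the desired state, container absent, already holds). The RemoveStaleState lines that follow show the CPU and memory managers purging per-container assignments for the departed pod UID before new pods are admitted. A dependency-free Go sketch of the idempotent-delete pattern, with a hypothetical removeContainer standing in for the runtime call:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

// removeContainer stands in for the runtime's delete call; here it reports
// NotFound because, as in the log, the container is already gone.
func removeContainer(id string) error {
	return fmt.Errorf("could not find container %s: %w", id, errNotFound)
}

func main() {
	if err := removeContainer("16be29c19f96"); errors.Is(err, errNotFound) {
		// The desired end state (container absent) already holds.
		fmt.Println("already removed; NotFound on delete counts as success")
	} else if err != nil {
		fmt.Println("real failure, retry on next sync:", err)
	}
}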
state_mem.go:107] "Deleted CPUSet assignment" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="extract-utilities" Dec 05 06:59:14 crc kubenswrapper[4780]: E1205 06:59:14.445342 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="registry-server" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.445348 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="registry-server" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.445438 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="674412c4-2c0c-4c39-b070-8e16e43a0e16" containerName="registry-server" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.446047 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.449305 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-v2pt6" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.458783 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.490807 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.491574 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.494061 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.495094 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qgm26"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.495836 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.516442 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.557142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgrr\" (UniqueName: \"kubernetes.io/projected/cdd4dac7-cd36-48d1-af63-2b40434c6e1c-kube-api-access-fdgrr\") pod \"nmstate-metrics-7f946cbc9-j42vk\" (UID: \"cdd4dac7-cd36-48d1-af63-2b40434c6e1c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.571264 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.572110 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.574120 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-njr77" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.574677 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.574777 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.620429 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658490 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxzb\" (UniqueName: \"kubernetes.io/projected/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-kube-api-access-4kxzb\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-dbus-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658710 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7e32be6a-c339-464c-94bb-44a5b3cb3224-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" (UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-ovs-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658932 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdgrr\" (UniqueName: \"kubernetes.io/projected/cdd4dac7-cd36-48d1-af63-2b40434c6e1c-kube-api-access-fdgrr\") pod \"nmstate-metrics-7f946cbc9-j42vk\" (UID: \"cdd4dac7-cd36-48d1-af63-2b40434c6e1c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658953 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-nmstate-lock\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.658983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppch\" (UniqueName: \"kubernetes.io/projected/7e32be6a-c339-464c-94bb-44a5b3cb3224-kube-api-access-lppch\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" 
(UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.680583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdgrr\" (UniqueName: \"kubernetes.io/projected/cdd4dac7-cd36-48d1-af63-2b40434c6e1c-kube-api-access-fdgrr\") pod \"nmstate-metrics-7f946cbc9-j42vk\" (UID: \"cdd4dac7-cd36-48d1-af63-2b40434c6e1c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759730 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759859 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-dbus-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7e32be6a-c339-464c-94bb-44a5b3cb3224-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" (UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-ovs-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759965 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-nmstate-lock\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.759991 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dlsr\" (UniqueName: \"kubernetes.io/projected/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-kube-api-access-6dlsr\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760020 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lppch\" (UniqueName: \"kubernetes.io/projected/7e32be6a-c339-464c-94bb-44a5b3cb3224-kube-api-access-lppch\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" (UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 
06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxzb\" (UniqueName: \"kubernetes.io/projected/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-kube-api-access-4kxzb\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-ovs-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760078 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-nmstate-lock\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.760159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-dbus-socket\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.763593 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7e32be6a-c339-464c-94bb-44a5b3cb3224-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" (UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.764629 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55dbd56b55-rx5qz"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.765239 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.785418 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55dbd56b55-rx5qz"] Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.792788 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lppch\" (UniqueName: \"kubernetes.io/projected/7e32be6a-c339-464c-94bb-44a5b3cb3224-kube-api-access-lppch\") pod \"nmstate-webhook-5f6d4c5ccb-dx2wx\" (UID: \"7e32be6a-c339-464c-94bb-44a5b3cb3224\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.795843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxzb\" (UniqueName: \"kubernetes.io/projected/6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f-kube-api-access-4kxzb\") pod \"nmstate-handler-qgm26\" (UID: \"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f\") " pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.807491 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.822494 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861052 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-trusted-ca-bundle\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861086 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-oauth-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srknn\" (UniqueName: \"kubernetes.io/projected/115f94e5-55a6-4ac8-8916-112641d9e17d-kube-api-access-srknn\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861142 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dlsr\" (UniqueName: \"kubernetes.io/projected/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-kube-api-access-6dlsr\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:14 crc 
kubenswrapper[4780]: I1205 06:59:14.861183 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-console-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861207 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861228 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-service-ca\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.861271 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-oauth-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: E1205 06:59:14.861403 4780 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
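[Editor's note] The failure above is the expected ordering race at pod creation: the plugin-serving-cert secret had not been created yet when the kubelet first tried to mount it, so nestedpendingoperations defers the retry (durationBeforeRetry 500ms in the next entry; the delay grows on repeated failures). The mount then succeeds at 06:59:15.371 further below, once the secret exists. A Go sketch of that retry-with-backoff shape; the constants and the mountSecret function are illustrative, not kubelet internals:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountSecret fails on the first attempt, as in the log, because the
// secret does not exist yet; it succeeds once the secret has appeared.
func mountSecret(attempt int) error {
	if attempt < 1 {
		return errors.New(`secret "plugin-serving-cert" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		if err := mountSecret(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n",
				attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // exponential backoff between attempts
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded on attempt %d\n", attempt)
		return
	}
}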
Dec 05 06:59:14 crc kubenswrapper[4780]: E1205 06:59:14.861452 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert podName:d169bdac-5bc1-4dff-9e50-d78c7eff7c37 nodeName:}" failed. No retries permitted until 2025-12-05 06:59:15.361431462 +0000 UTC m=+789.430947794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-tlmcj" (UID: "d169bdac-5bc1-4dff-9e50-d78c7eff7c37") : secret "plugin-serving-cert" not found
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.862022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.880667 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dlsr\" (UniqueName: \"kubernetes.io/projected/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-kube-api-access-6dlsr\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.961952 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-console-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-service-ca\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-oauth-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-trusted-ca-bundle\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-oauth-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.962197 4780
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srknn\" (UniqueName: \"kubernetes.io/projected/115f94e5-55a6-4ac8-8916-112641d9e17d-kube-api-access-srknn\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.963047 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-console-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.963117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-service-ca\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.963671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-oauth-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.966292 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/115f94e5-55a6-4ac8-8916-112641d9e17d-trusted-ca-bundle\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.966404 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-oauth-config\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.966953 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/115f94e5-55a6-4ac8-8916-112641d9e17d-console-serving-cert\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:14 crc kubenswrapper[4780]: I1205 06:59:14.979030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srknn\" (UniqueName: \"kubernetes.io/projected/115f94e5-55a6-4ac8-8916-112641d9e17d-kube-api-access-srknn\") pod \"console-55dbd56b55-rx5qz\" (UID: \"115f94e5-55a6-4ac8-8916-112641d9e17d\") " pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.026353 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk"] Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.066533 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx"] Dec 05 06:59:15 crc kubenswrapper[4780]: W1205 06:59:15.070138 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e32be6a_c339_464c_94bb_44a5b3cb3224.slice/crio-1cbfb4bc5f947cb15c22e977362ddf1656e9d8435b4a1f473f3ea70c7babdbeb WatchSource:0}: Error finding container 1cbfb4bc5f947cb15c22e977362ddf1656e9d8435b4a1f473f3ea70c7babdbeb: Status 404 returned error can't find the container with id 1cbfb4bc5f947cb15c22e977362ddf1656e9d8435b4a1f473f3ea70c7babdbeb Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.159412 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.165588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" event={"ID":"7e32be6a-c339-464c-94bb-44a5b3cb3224","Type":"ContainerStarted","Data":"1cbfb4bc5f947cb15c22e977362ddf1656e9d8435b4a1f473f3ea70c7babdbeb"} Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.166760 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qgm26" event={"ID":"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f","Type":"ContainerStarted","Data":"cbb0c3f51971eb703c78c55dd95c7d83cc0b257ea852147e9300d2f3219d9a13"} Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.167444 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" event={"ID":"cdd4dac7-cd36-48d1-af63-2b40434c6e1c","Type":"ContainerStarted","Data":"f923b4ff094c8f1141ff60859fca835f8348f9198afccba239d512b14c3addd5"} Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.326924 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55dbd56b55-rx5qz"] Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.367250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.371019 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d169bdac-5bc1-4dff-9e50-d78c7eff7c37-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tlmcj\" (UID: \"d169bdac-5bc1-4dff-9e50-d78c7eff7c37\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.521095 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" Dec 05 06:59:15 crc kubenswrapper[4780]: I1205 06:59:15.893076 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj"] Dec 05 06:59:16 crc kubenswrapper[4780]: I1205 06:59:16.172862 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" event={"ID":"d169bdac-5bc1-4dff-9e50-d78c7eff7c37","Type":"ContainerStarted","Data":"eaa3ab6c52b02654abf868b51b8d5ac3ffbb55420fe76ce093c92ec5c5c58aa4"} Dec 05 06:59:16 crc kubenswrapper[4780]: I1205 06:59:16.174859 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55dbd56b55-rx5qz" event={"ID":"115f94e5-55a6-4ac8-8916-112641d9e17d","Type":"ContainerStarted","Data":"679bda673a2c057f62c3eed6a6a8db0bb9e65535fcc2fa9b86cac2ca35e5c362"} Dec 05 06:59:16 crc kubenswrapper[4780]: I1205 06:59:16.174920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55dbd56b55-rx5qz" event={"ID":"115f94e5-55a6-4ac8-8916-112641d9e17d","Type":"ContainerStarted","Data":"d94c8b239cc9cdfe19b7390fce3fef5357dc465788ec3210c59e4184c1209a3b"} Dec 05 06:59:16 crc kubenswrapper[4780]: I1205 06:59:16.226511 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55dbd56b55-rx5qz" podStartSLOduration=2.226486985 podStartE2EDuration="2.226486985s" podCreationTimestamp="2025-12-05 06:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 06:59:16.218312086 +0000 UTC m=+790.287828418" watchObservedRunningTime="2025-12-05 06:59:16.226486985 +0000 UTC m=+790.296003307" Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.186206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" event={"ID":"cdd4dac7-cd36-48d1-af63-2b40434c6e1c","Type":"ContainerStarted","Data":"f5e72cd6562ccfcf5dcb19a22a8cb8784c659b99ea3593b5684c3721c7218796"} Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.187861 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" event={"ID":"7e32be6a-c339-464c-94bb-44a5b3cb3224","Type":"ContainerStarted","Data":"94622c74c9d1325c9373a759d62e0ca334c93695b70e9afd5c43e5e88bdcfaa7"} Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.187949 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.189710 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qgm26" event={"ID":"6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f","Type":"ContainerStarted","Data":"7f9e99643d2006e2377b9840c7a33993aa96ebe600fedcd96339106a4a201e43"} Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.190386 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.211003 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx" podStartSLOduration=1.510659666 podStartE2EDuration="4.210975314s" podCreationTimestamp="2025-12-05 06:59:14 +0000 UTC" firstStartedPulling="2025-12-05 06:59:15.072282399 +0000 UTC m=+789.141798731" 
lastFinishedPulling="2025-12-05 06:59:17.772598047 +0000 UTC m=+791.842114379" observedRunningTime="2025-12-05 06:59:18.202602031 +0000 UTC m=+792.272118363" watchObservedRunningTime="2025-12-05 06:59:18.210975314 +0000 UTC m=+792.280491646" Dec 05 06:59:18 crc kubenswrapper[4780]: I1205 06:59:18.227980 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qgm26" podStartSLOduration=1.34557013 podStartE2EDuration="4.227950617s" podCreationTimestamp="2025-12-05 06:59:14 +0000 UTC" firstStartedPulling="2025-12-05 06:59:14.860060136 +0000 UTC m=+788.929576468" lastFinishedPulling="2025-12-05 06:59:17.742440623 +0000 UTC m=+791.811956955" observedRunningTime="2025-12-05 06:59:18.222563074 +0000 UTC m=+792.292079406" watchObservedRunningTime="2025-12-05 06:59:18.227950617 +0000 UTC m=+792.297466949" Dec 05 06:59:20 crc kubenswrapper[4780]: I1205 06:59:20.205020 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" event={"ID":"d169bdac-5bc1-4dff-9e50-d78c7eff7c37","Type":"ContainerStarted","Data":"a77e430c17b79f57b6e1bdca45ad7398728050e1f1876e76013b2b9ebc4f15b1"} Dec 05 06:59:21 crc kubenswrapper[4780]: I1205 06:59:21.235479 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tlmcj" podStartSLOduration=4.27492693 podStartE2EDuration="7.235456903s" podCreationTimestamp="2025-12-05 06:59:14 +0000 UTC" firstStartedPulling="2025-12-05 06:59:15.906162738 +0000 UTC m=+789.975679060" lastFinishedPulling="2025-12-05 06:59:18.866692681 +0000 UTC m=+792.936209033" observedRunningTime="2025-12-05 06:59:21.226222377 +0000 UTC m=+795.295738709" watchObservedRunningTime="2025-12-05 06:59:21.235456903 +0000 UTC m=+795.304973235" Dec 05 06:59:24 crc kubenswrapper[4780]: I1205 06:59:24.239767 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" event={"ID":"cdd4dac7-cd36-48d1-af63-2b40434c6e1c","Type":"ContainerStarted","Data":"1ca76872834204314ac061f537af5a1a0cf85cc1d6823d41b3dfa8522fce1c07"} Dec 05 06:59:24 crc kubenswrapper[4780]: I1205 06:59:24.260484 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-j42vk" podStartSLOduration=1.598875289 podStartE2EDuration="10.260462746s" podCreationTimestamp="2025-12-05 06:59:14 +0000 UTC" firstStartedPulling="2025-12-05 06:59:15.0419834 +0000 UTC m=+789.111499732" lastFinishedPulling="2025-12-05 06:59:23.703570857 +0000 UTC m=+797.773087189" observedRunningTime="2025-12-05 06:59:24.255073902 +0000 UTC m=+798.324590234" watchObservedRunningTime="2025-12-05 06:59:24.260462746 +0000 UTC m=+798.329979078" Dec 05 06:59:24 crc kubenswrapper[4780]: I1205 06:59:24.842315 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qgm26" Dec 05 06:59:25 crc kubenswrapper[4780]: I1205 06:59:25.160252 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:25 crc kubenswrapper[4780]: I1205 06:59:25.160311 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:25 crc kubenswrapper[4780]: I1205 06:59:25.165323 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55dbd56b55-rx5qz" Dec 05 06:59:25 crc 
Dec 05 06:59:25 crc kubenswrapper[4780]: I1205 06:59:25.249349 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55dbd56b55-rx5qz"
Dec 05 06:59:25 crc kubenswrapper[4780]: I1205 06:59:25.308455 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mw286"]
Dec 05 06:59:29 crc kubenswrapper[4780]: I1205 06:59:29.907992 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 06:59:29 crc kubenswrapper[4780]: I1205 06:59:29.908334 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 06:59:34 crc kubenswrapper[4780]: I1205 06:59:34.857472 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dx2wx"
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.842060 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"]
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.843728 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.845406 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.858504 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"]
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.861793 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn2pc\" (UniqueName: \"kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.861929 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.862037 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"
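[Editor's note] The 06:59:29 entries show a single liveness-probe failure for machine-config-daemon-mjftd: the HTTP GET to 127.0.0.1:8798/health was refused, most likely because the daemon was restarting its listener. One failure is recorded but not acted on; the kubelet only restarts a container after failureThreshold consecutive failures. A hedged Go sketch of an HTTP liveness check of this shape (the URL is taken from the log; the loop, period, and threshold handling are illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP liveness check, as the kubelet prober does.
func probeOnce(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const url = "http://127.0.0.1:8798/health"
	consecutive := 0
	for i := 0; i < 3; i++ { // one check per second, a periodSeconds stand-in
		if err := probeOnce(url); err != nil {
			consecutive++
			fmt.Printf("Probe failed: probeType=Liveness output=%q (consecutive=%d)\n", err, consecutive)
		} else {
			consecutive = 0 // any success resets the failure count
		}
		time.Sleep(time.Second)
	}
	// Only failureThreshold consecutive failures trigger a restart, so the
	// single failure recorded in the log has no immediate effect.
}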
Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.962483 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2pc\" (UniqueName: \"kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.962744 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.962835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.963275 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.963363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:45 crc kubenswrapper[4780]: I1205 06:59:45.980084 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2pc\" (UniqueName: \"kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:46 crc kubenswrapper[4780]: I1205 06:59:46.161017 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:46 crc kubenswrapper[4780]: I1205 06:59:46.333083 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk"] Dec 05 06:59:46 crc kubenswrapper[4780]: I1205 06:59:46.378320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" event={"ID":"978f8e4f-0b0b-4604-a233-6c85dd81376b","Type":"ContainerStarted","Data":"dd38304ee3c946b92e70341e57f7d89e086ba8a878124e5a2ce6571d50acd874"} Dec 05 06:59:48 crc kubenswrapper[4780]: I1205 06:59:48.395281 4780 generic.go:334] "Generic (PLEG): container finished" podID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerID="c92c1304ae2919098b50ece994873bfb4b3ebdf675da261545e071e722648c82" exitCode=0 Dec 05 06:59:48 crc kubenswrapper[4780]: I1205 06:59:48.395343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" event={"ID":"978f8e4f-0b0b-4604-a233-6c85dd81376b","Type":"ContainerDied","Data":"c92c1304ae2919098b50ece994873bfb4b3ebdf675da261545e071e722648c82"} Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.348114 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mw286" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" containerID="cri-o://bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582" gracePeriod=15 Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.408226 4780 generic.go:334] "Generic (PLEG): container finished" podID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerID="e9fbc4fd9f5a44dea31b212a80aeb6700dbe028f229d3ce7fd24758afd46f24b" exitCode=0 Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.408271 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" event={"ID":"978f8e4f-0b0b-4604-a233-6c85dd81376b","Type":"ContainerDied","Data":"e9fbc4fd9f5a44dea31b212a80aeb6700dbe028f229d3ce7fd24758afd46f24b"} Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.752340 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mw286_7861d984-72f7-44e0-8d42-fb04a7d2000e/console/0.log" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.752620 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930539 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqlc8\" (UniqueName: \"kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930560 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930597 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930662 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930706 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.930723 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca\") pod \"7861d984-72f7-44e0-8d42-fb04a7d2000e\" (UID: \"7861d984-72f7-44e0-8d42-fb04a7d2000e\") " Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.931449 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.931632 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.931837 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config" (OuterVolumeSpecName: "console-config") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.931953 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca" (OuterVolumeSpecName: "service-ca") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.936214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.937059 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8" (OuterVolumeSpecName: "kube-api-access-mqlc8") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "kube-api-access-mqlc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:59:50 crc kubenswrapper[4780]: I1205 06:59:50.937137 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7861d984-72f7-44e0-8d42-fb04a7d2000e" (UID: "7861d984-72f7-44e0-8d42-fb04a7d2000e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031613 4780 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031655 4780 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031664 4780 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031673 4780 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031681 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqlc8\" (UniqueName: \"kubernetes.io/projected/7861d984-72f7-44e0-8d42-fb04a7d2000e-kube-api-access-mqlc8\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031690 4780 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.031698 4780 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7861d984-72f7-44e0-8d42-fb04a7d2000e-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.416566 4780 generic.go:334] "Generic (PLEG): container finished" podID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerID="220c884188d68d22920a652d395913f3c52c1bd66c45f0db3c6e884e1c36325c" exitCode=0 Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.416641 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" event={"ID":"978f8e4f-0b0b-4604-a233-6c85dd81376b","Type":"ContainerDied","Data":"220c884188d68d22920a652d395913f3c52c1bd66c45f0db3c6e884e1c36325c"} Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.419936 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mw286_7861d984-72f7-44e0-8d42-fb04a7d2000e/console/0.log" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.419981 4780 generic.go:334] "Generic (PLEG): container finished" podID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerID="bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582" exitCode=2 Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.420006 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mw286" event={"ID":"7861d984-72f7-44e0-8d42-fb04a7d2000e","Type":"ContainerDied","Data":"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582"} Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.420027 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mw286" 
event={"ID":"7861d984-72f7-44e0-8d42-fb04a7d2000e","Type":"ContainerDied","Data":"48b6ae70cbeb22d56011bc4d60adb6e04f81000c29e5d59df32493d0465e0dde"} Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.420042 4780 scope.go:117] "RemoveContainer" containerID="bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.420141 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mw286" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.440501 4780 scope.go:117] "RemoveContainer" containerID="bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582" Dec 05 06:59:51 crc kubenswrapper[4780]: E1205 06:59:51.441004 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582\": container with ID starting with bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582 not found: ID does not exist" containerID="bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.441044 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582"} err="failed to get container status \"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582\": rpc error: code = NotFound desc = could not find container \"bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582\": container with ID starting with bf92e49acc08a5d0f8bd61421b7b40ea6bd775070ccd48b20090cfb177f63582 not found: ID does not exist" Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.455808 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mw286"] Dec 05 06:59:51 crc kubenswrapper[4780]: I1205 06:59:51.461244 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mw286"] Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.148898 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" path="/var/lib/kubelet/pods/7861d984-72f7-44e0-8d42-fb04a7d2000e/volumes" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.692517 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.859637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn2pc\" (UniqueName: \"kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc\") pod \"978f8e4f-0b0b-4604-a233-6c85dd81376b\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.859741 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util\") pod \"978f8e4f-0b0b-4604-a233-6c85dd81376b\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.859806 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle\") pod \"978f8e4f-0b0b-4604-a233-6c85dd81376b\" (UID: \"978f8e4f-0b0b-4604-a233-6c85dd81376b\") " Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.862225 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle" (OuterVolumeSpecName: "bundle") pod "978f8e4f-0b0b-4604-a233-6c85dd81376b" (UID: "978f8e4f-0b0b-4604-a233-6c85dd81376b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.864248 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc" (OuterVolumeSpecName: "kube-api-access-cn2pc") pod "978f8e4f-0b0b-4604-a233-6c85dd81376b" (UID: "978f8e4f-0b0b-4604-a233-6c85dd81376b"). InnerVolumeSpecName "kube-api-access-cn2pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.882241 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util" (OuterVolumeSpecName: "util") pod "978f8e4f-0b0b-4604-a233-6c85dd81376b" (UID: "978f8e4f-0b0b-4604-a233-6c85dd81376b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.961496 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-util\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.961536 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/978f8e4f-0b0b-4604-a233-6c85dd81376b-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:52 crc kubenswrapper[4780]: I1205 06:59:52.961550 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn2pc\" (UniqueName: \"kubernetes.io/projected/978f8e4f-0b0b-4604-a233-6c85dd81376b-kube-api-access-cn2pc\") on node \"crc\" DevicePath \"\"" Dec 05 06:59:53 crc kubenswrapper[4780]: I1205 06:59:53.434904 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" event={"ID":"978f8e4f-0b0b-4604-a233-6c85dd81376b","Type":"ContainerDied","Data":"dd38304ee3c946b92e70341e57f7d89e086ba8a878124e5a2ce6571d50acd874"} Dec 05 06:59:53 crc kubenswrapper[4780]: I1205 06:59:53.434950 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd38304ee3c946b92e70341e57f7d89e086ba8a878124e5a2ce6571d50acd874" Dec 05 06:59:53 crc kubenswrapper[4780]: I1205 06:59:53.435006 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk" Dec 05 06:59:59 crc kubenswrapper[4780]: I1205 06:59:59.908267 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 06:59:59 crc kubenswrapper[4780]: I1205 06:59:59.908830 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 06:59:59 crc kubenswrapper[4780]: I1205 06:59:59.908903 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 06:59:59 crc kubenswrapper[4780]: I1205 06:59:59.909448 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 06:59:59 crc kubenswrapper[4780]: I1205 06:59:59.909511 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f" gracePeriod=600 Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.144748 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh"] Dec 05 07:00:00 crc kubenswrapper[4780]: E1205 07:00:00.145233 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145254 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" Dec 05 07:00:00 crc kubenswrapper[4780]: E1205 07:00:00.145268 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="pull" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145276 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="pull" Dec 05 07:00:00 crc kubenswrapper[4780]: E1205 07:00:00.145294 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="extract" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145302 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="extract" Dec 05 07:00:00 crc kubenswrapper[4780]: E1205 07:00:00.145317 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="util" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145325 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="util" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145450 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7861d984-72f7-44e0-8d42-fb04a7d2000e" containerName="console" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145469 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="978f8e4f-0b0b-4604-a233-6c85dd81376b" containerName="extract" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.145936 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.147971 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.149008 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.154165 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh"] Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.245623 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.245695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxdd\" (UniqueName: \"kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.245766 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.346694 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxdd\" (UniqueName: \"kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.346779 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.346843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.347968 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume\") pod 
\"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.362743 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.366560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxdd\" (UniqueName: \"kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd\") pod \"collect-profiles-29415300-v98mh\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.463373 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.469849 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f" exitCode=0 Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.469906 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f"} Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.469945 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41"} Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.469961 4780 scope.go:117] "RemoveContainer" containerID="212dabc0c5b619fa2c547e0f981407952254f9ee32e03086b047d425e50bb10a" Dec 05 07:00:00 crc kubenswrapper[4780]: I1205 07:00:00.861892 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh"] Dec 05 07:00:00 crc kubenswrapper[4780]: W1205 07:00:00.872610 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5ea633e_b2ae_4ba0_87d7_bdcdf5dd9d03.slice/crio-376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764 WatchSource:0}: Error finding container 376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764: Status 404 returned error can't find the container with id 376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764 Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.185355 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv"] Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.186254 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.190658 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.190822 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.191801 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.192068 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.192394 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lx57x" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.211579 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv"] Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.362115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-webhook-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.362435 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpth\" (UniqueName: \"kubernetes.io/projected/f65804ab-3d85-427c-9143-5092175e82f9-kube-api-access-kkpth\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.362524 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-apiservice-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.463451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-webhook-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.464180 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpth\" (UniqueName: \"kubernetes.io/projected/f65804ab-3d85-427c-9143-5092175e82f9-kube-api-access-kkpth\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.464324 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-apiservice-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.469414 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-webhook-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.469490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f65804ab-3d85-427c-9143-5092175e82f9-apiservice-cert\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.480170 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" containerID="6d4ee30a0f4f08d9ef1f387ef1c173cc26a5bf177568b67b91cc135bb419e4d1" exitCode=0 Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.480217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" event={"ID":"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03","Type":"ContainerDied","Data":"6d4ee30a0f4f08d9ef1f387ef1c173cc26a5bf177568b67b91cc135bb419e4d1"} Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.480248 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" event={"ID":"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03","Type":"ContainerStarted","Data":"376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764"} Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.488465 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpth\" (UniqueName: \"kubernetes.io/projected/f65804ab-3d85-427c-9143-5092175e82f9-kube-api-access-kkpth\") pod \"metallb-operator-controller-manager-544c44bb58-hzmkv\" (UID: \"f65804ab-3d85-427c-9143-5092175e82f9\") " pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.502965 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.559348 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz"] Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.560220 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.562140 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ct8nf" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.562406 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.563668 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.586004 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz"] Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.668648 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-apiservice-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.668731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-webhook-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.668756 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg62v\" (UniqueName: \"kubernetes.io/projected/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-kube-api-access-fg62v\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.772011 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-apiservice-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.772615 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-webhook-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.772652 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg62v\" (UniqueName: \"kubernetes.io/projected/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-kube-api-access-fg62v\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 
07:00:01.782348 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-apiservice-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.790866 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-webhook-cert\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.795380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg62v\" (UniqueName: \"kubernetes.io/projected/00f67bb2-0ac4-4e3c-b17c-733b12b5fde8-kube-api-access-fg62v\") pod \"metallb-operator-webhook-server-7d695447b7-xktbz\" (UID: \"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8\") " pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:01 crc kubenswrapper[4780]: I1205 07:00:01.881008 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.023585 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv"] Dec 05 07:00:02 crc kubenswrapper[4780]: W1205 07:00:02.028832 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65804ab_3d85_427c_9143_5092175e82f9.slice/crio-36cd166b322b42a1ffa7b4861ea50fd7a325432eac51d4d3c4633c387e9e5bc1 WatchSource:0}: Error finding container 36cd166b322b42a1ffa7b4861ea50fd7a325432eac51d4d3c4633c387e9e5bc1: Status 404 returned error can't find the container with id 36cd166b322b42a1ffa7b4861ea50fd7a325432eac51d4d3c4633c387e9e5bc1 Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.110800 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz"] Dec 05 07:00:02 crc kubenswrapper[4780]: W1205 07:00:02.115773 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00f67bb2_0ac4_4e3c_b17c_733b12b5fde8.slice/crio-2b53bafc1d1d74cb68a7e4352cbe071d035e0dc544d79e0bc2636c5c5b1ffe24 WatchSource:0}: Error finding container 2b53bafc1d1d74cb68a7e4352cbe071d035e0dc544d79e0bc2636c5c5b1ffe24: Status 404 returned error can't find the container with id 2b53bafc1d1d74cb68a7e4352cbe071d035e0dc544d79e0bc2636c5c5b1ffe24 Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.487219 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" event={"ID":"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8","Type":"ContainerStarted","Data":"2b53bafc1d1d74cb68a7e4352cbe071d035e0dc544d79e0bc2636c5c5b1ffe24"} Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.488356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" 
event={"ID":"f65804ab-3d85-427c-9143-5092175e82f9","Type":"ContainerStarted","Data":"36cd166b322b42a1ffa7b4861ea50fd7a325432eac51d4d3c4633c387e9e5bc1"} Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.707822 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.790629 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmxdd\" (UniqueName: \"kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd\") pod \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.790696 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume\") pod \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.790773 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume\") pod \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\" (UID: \"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03\") " Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.791716 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" (UID: "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.795723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd" (OuterVolumeSpecName: "kube-api-access-pmxdd") pod "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" (UID: "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03"). InnerVolumeSpecName "kube-api-access-pmxdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.796261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" (UID: "a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.891818 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmxdd\" (UniqueName: \"kubernetes.io/projected/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-kube-api-access-pmxdd\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.891849 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:02 crc kubenswrapper[4780]: I1205 07:00:02.891858 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:03 crc kubenswrapper[4780]: I1205 07:00:03.494445 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" event={"ID":"a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03","Type":"ContainerDied","Data":"376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764"} Dec 05 07:00:03 crc kubenswrapper[4780]: I1205 07:00:03.494799 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376b1f43247381f0dd9f4fe8fb79a2a2dba0018c7a4b48ad9daaca1b0f6ef764" Dec 05 07:00:03 crc kubenswrapper[4780]: I1205 07:00:03.494568 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh" Dec 05 07:00:09 crc kubenswrapper[4780]: I1205 07:00:09.540343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" event={"ID":"00f67bb2-0ac4-4e3c-b17c-733b12b5fde8","Type":"ContainerStarted","Data":"2dec5a920770fd277b748ed576bb1103435bcb74323e653721369769d42c3299"} Dec 05 07:00:09 crc kubenswrapper[4780]: I1205 07:00:09.541609 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:09 crc kubenswrapper[4780]: I1205 07:00:09.558900 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" podStartSLOduration=2.218077945 podStartE2EDuration="8.558858828s" podCreationTimestamp="2025-12-05 07:00:01 +0000 UTC" firstStartedPulling="2025-12-05 07:00:02.121451634 +0000 UTC m=+836.190967966" lastFinishedPulling="2025-12-05 07:00:08.462232517 +0000 UTC m=+842.531748849" observedRunningTime="2025-12-05 07:00:09.555127698 +0000 UTC m=+843.624644030" watchObservedRunningTime="2025-12-05 07:00:09.558858828 +0000 UTC m=+843.628375160" Dec 05 07:00:15 crc kubenswrapper[4780]: I1205 07:00:15.577649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" event={"ID":"f65804ab-3d85-427c-9143-5092175e82f9","Type":"ContainerStarted","Data":"0151a57e5d4fa2a7d2ed70419618a290ee9bdbe08a36b2206f35460d957524b1"} Dec 05 07:00:15 crc kubenswrapper[4780]: I1205 07:00:15.579139 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:15 crc kubenswrapper[4780]: I1205 07:00:15.599945 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" podStartSLOduration=2.144684223 podStartE2EDuration="14.599927629s" podCreationTimestamp="2025-12-05 07:00:01 +0000 UTC" firstStartedPulling="2025-12-05 07:00:02.035594743 +0000 UTC m=+836.105111075" lastFinishedPulling="2025-12-05 07:00:14.490838149 +0000 UTC m=+848.560354481" observedRunningTime="2025-12-05 07:00:15.59652245 +0000 UTC m=+849.666038782" watchObservedRunningTime="2025-12-05 07:00:15.599927629 +0000 UTC m=+849.669443961" Dec 05 07:00:21 crc kubenswrapper[4780]: I1205 07:00:21.884713 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7d695447b7-xktbz" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.433384 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:22 crc kubenswrapper[4780]: E1205 07:00:22.433936 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" containerName="collect-profiles" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.433956 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" containerName="collect-profiles" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.434094 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" containerName="collect-profiles" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.434905 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.446162 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.536305 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.536450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.536502 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddnt9\" (UniqueName: \"kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.638050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddnt9\" (UniqueName: \"kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc 
kubenswrapper[4780]: I1205 07:00:22.638117 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.638197 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.638742 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.638800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.665009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddnt9\" (UniqueName: \"kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9\") pod \"community-operators-7hvlv\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:22 crc kubenswrapper[4780]: I1205 07:00:22.751353 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:23 crc kubenswrapper[4780]: I1205 07:00:23.194012 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:23 crc kubenswrapper[4780]: I1205 07:00:23.616250 4780 generic.go:334] "Generic (PLEG): container finished" podID="df9458ed-385a-4620-9802-9bd7b6665d48" containerID="391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed" exitCode=0 Dec 05 07:00:23 crc kubenswrapper[4780]: I1205 07:00:23.616294 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerDied","Data":"391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed"} Dec 05 07:00:23 crc kubenswrapper[4780]: I1205 07:00:23.616319 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerStarted","Data":"96795b90f6f4502361de4d40a9d0fbe2614fb99190aee2805be124445ebe39c9"} Dec 05 07:00:24 crc kubenswrapper[4780]: I1205 07:00:24.623531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerStarted","Data":"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574"} Dec 05 07:00:25 crc kubenswrapper[4780]: I1205 07:00:25.630919 4780 generic.go:334] "Generic (PLEG): container finished" podID="df9458ed-385a-4620-9802-9bd7b6665d48" containerID="ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574" exitCode=0 Dec 05 07:00:25 crc kubenswrapper[4780]: I1205 07:00:25.630962 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerDied","Data":"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574"} Dec 05 07:00:26 crc kubenswrapper[4780]: I1205 07:00:26.637981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerStarted","Data":"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e"} Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.493006 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hvlv" podStartSLOduration=7.076891199 podStartE2EDuration="9.492987907s" podCreationTimestamp="2025-12-05 07:00:22 +0000 UTC" firstStartedPulling="2025-12-05 07:00:23.617995744 +0000 UTC m=+857.687512076" lastFinishedPulling="2025-12-05 07:00:26.034092452 +0000 UTC m=+860.103608784" observedRunningTime="2025-12-05 07:00:26.65241884 +0000 UTC m=+860.721935182" watchObservedRunningTime="2025-12-05 07:00:31.492987907 +0000 UTC m=+865.562504239" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.498564 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.499848 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.524281 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.545638 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.545716 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.545781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbsz\" (UniqueName: \"kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.646750 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbsz\" (UniqueName: \"kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.646845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.646911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.647428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.648369 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.670747 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jxbsz\" (UniqueName: \"kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz\") pod \"redhat-marketplace-xh927\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:31 crc kubenswrapper[4780]: I1205 07:00:31.815618 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.104993 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.668612 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerID="111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd" exitCode=0 Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.668661 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerDied","Data":"111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd"} Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.668696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerStarted","Data":"6853c41c5bea13dca04f23db3fc18720e14353152134ee2fe43e95bf85dc95a0"} Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.752048 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.752093 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:32 crc kubenswrapper[4780]: I1205 07:00:32.792486 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:33 crc kubenswrapper[4780]: I1205 07:00:33.675293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerStarted","Data":"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10"} Dec 05 07:00:33 crc kubenswrapper[4780]: I1205 07:00:33.740385 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:34 crc kubenswrapper[4780]: I1205 07:00:34.794674 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerID="f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10" exitCode=0 Dec 05 07:00:34 crc kubenswrapper[4780]: I1205 07:00:34.795364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerDied","Data":"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10"} Dec 05 07:00:35 crc kubenswrapper[4780]: I1205 07:00:35.020680 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:35 crc kubenswrapper[4780]: I1205 07:00:35.800454 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-7hvlv" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="registry-server" containerID="cri-o://d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e" gracePeriod=2 Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.182733 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.226414 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content\") pod \"df9458ed-385a-4620-9802-9bd7b6665d48\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.226473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddnt9\" (UniqueName: \"kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9\") pod \"df9458ed-385a-4620-9802-9bd7b6665d48\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.226509 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities\") pod \"df9458ed-385a-4620-9802-9bd7b6665d48\" (UID: \"df9458ed-385a-4620-9802-9bd7b6665d48\") " Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.227627 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities" (OuterVolumeSpecName: "utilities") pod "df9458ed-385a-4620-9802-9bd7b6665d48" (UID: "df9458ed-385a-4620-9802-9bd7b6665d48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.234058 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9" (OuterVolumeSpecName: "kube-api-access-ddnt9") pod "df9458ed-385a-4620-9802-9bd7b6665d48" (UID: "df9458ed-385a-4620-9802-9bd7b6665d48"). InnerVolumeSpecName "kube-api-access-ddnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.275322 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df9458ed-385a-4620-9802-9bd7b6665d48" (UID: "df9458ed-385a-4620-9802-9bd7b6665d48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.327941 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.327989 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddnt9\" (UniqueName: \"kubernetes.io/projected/df9458ed-385a-4620-9802-9bd7b6665d48-kube-api-access-ddnt9\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.328002 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df9458ed-385a-4620-9802-9bd7b6665d48-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.807138 4780 generic.go:334] "Generic (PLEG): container finished" podID="df9458ed-385a-4620-9802-9bd7b6665d48" containerID="d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e" exitCode=0 Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.807196 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hvlv" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.807225 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerDied","Data":"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e"} Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.807297 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hvlv" event={"ID":"df9458ed-385a-4620-9802-9bd7b6665d48","Type":"ContainerDied","Data":"96795b90f6f4502361de4d40a9d0fbe2614fb99190aee2805be124445ebe39c9"} Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.807316 4780 scope.go:117] "RemoveContainer" containerID="d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.809583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerStarted","Data":"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c"} Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.832773 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xh927" podStartSLOduration=2.749285847 podStartE2EDuration="5.8327474s" podCreationTimestamp="2025-12-05 07:00:31 +0000 UTC" firstStartedPulling="2025-12-05 07:00:32.670711177 +0000 UTC m=+866.740227519" lastFinishedPulling="2025-12-05 07:00:35.75417274 +0000 UTC m=+869.823689072" observedRunningTime="2025-12-05 07:00:36.830869231 +0000 UTC m=+870.900385563" watchObservedRunningTime="2025-12-05 07:00:36.8327474 +0000 UTC m=+870.902263732" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.833112 4780 scope.go:117] "RemoveContainer" containerID="ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.845224 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.848862 4780 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-7hvlv"] Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.859914 4780 scope.go:117] "RemoveContainer" containerID="391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.873655 4780 scope.go:117] "RemoveContainer" containerID="d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e" Dec 05 07:00:36 crc kubenswrapper[4780]: E1205 07:00:36.874052 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e\": container with ID starting with d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e not found: ID does not exist" containerID="d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.874094 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e"} err="failed to get container status \"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e\": rpc error: code = NotFound desc = could not find container \"d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e\": container with ID starting with d465363530ed019519155e654bf4fd4d815aae1eef718117e71f0daf93682d4e not found: ID does not exist" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.874118 4780 scope.go:117] "RemoveContainer" containerID="ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574" Dec 05 07:00:36 crc kubenswrapper[4780]: E1205 07:00:36.874344 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574\": container with ID starting with ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574 not found: ID does not exist" containerID="ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.874376 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574"} err="failed to get container status \"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574\": rpc error: code = NotFound desc = could not find container \"ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574\": container with ID starting with ff3633c3de1b80b36a149fe8d96a47d32a8d9e030ead95d0a56ccd1951b3a574 not found: ID does not exist" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.874394 4780 scope.go:117] "RemoveContainer" containerID="391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed" Dec 05 07:00:36 crc kubenswrapper[4780]: E1205 07:00:36.874678 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed\": container with ID starting with 391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed not found: ID does not exist" containerID="391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed" Dec 05 07:00:36 crc kubenswrapper[4780]: I1205 07:00:36.874706 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed"} err="failed to get container status \"391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed\": rpc error: code = NotFound desc = could not find container \"391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed\": container with ID starting with 391e31b9a6184af817d802eac829548c89c74c7cce4069b67f61c60ffbd0ffed not found: ID does not exist" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.624865 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:37 crc kubenswrapper[4780]: E1205 07:00:37.625142 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="registry-server" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.625158 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="registry-server" Dec 05 07:00:37 crc kubenswrapper[4780]: E1205 07:00:37.625185 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="extract-utilities" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.625193 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="extract-utilities" Dec 05 07:00:37 crc kubenswrapper[4780]: E1205 07:00:37.625212 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="extract-content" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.625219 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="extract-content" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.625334 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" containerName="registry-server" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.626253 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.653133 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.745193 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jch\" (UniqueName: \"kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.745263 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.745289 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.847380 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jch\" (UniqueName: \"kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.847454 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.847480 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.847908 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.848104 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.876002 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-95jch\" (UniqueName: \"kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch\") pod \"certified-operators-m68h5\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:37 crc kubenswrapper[4780]: I1205 07:00:37.941854 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:38 crc kubenswrapper[4780]: I1205 07:00:38.149262 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9458ed-385a-4620-9802-9bd7b6665d48" path="/var/lib/kubelet/pods/df9458ed-385a-4620-9802-9bd7b6665d48/volumes" Dec 05 07:00:38 crc kubenswrapper[4780]: I1205 07:00:38.414586 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:38 crc kubenswrapper[4780]: W1205 07:00:38.424952 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8651ba_b796_491b_a747_d29f6b9ef4bd.slice/crio-311495daa26a169ff321caccd8d1cddf28d103f746d25149669558ff38a1e329 WatchSource:0}: Error finding container 311495daa26a169ff321caccd8d1cddf28d103f746d25149669558ff38a1e329: Status 404 returned error can't find the container with id 311495daa26a169ff321caccd8d1cddf28d103f746d25149669558ff38a1e329 Dec 05 07:00:38 crc kubenswrapper[4780]: I1205 07:00:38.824374 4780 generic.go:334] "Generic (PLEG): container finished" podID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerID="d3112bcbf7f7db8c7079db4459021f38df3e533b0e720fea07ddd65c2c565b24" exitCode=0 Dec 05 07:00:38 crc kubenswrapper[4780]: I1205 07:00:38.824435 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerDied","Data":"d3112bcbf7f7db8c7079db4459021f38df3e533b0e720fea07ddd65c2c565b24"} Dec 05 07:00:38 crc kubenswrapper[4780]: I1205 07:00:38.824971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerStarted","Data":"311495daa26a169ff321caccd8d1cddf28d103f746d25149669558ff38a1e329"} Dec 05 07:00:40 crc kubenswrapper[4780]: I1205 07:00:40.839595 4780 generic.go:334] "Generic (PLEG): container finished" podID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerID="2467aab2d90455af4b5e5e1e5ad1fd31c4b9a24d9c6fdaebf225b9fd024070dc" exitCode=0 Dec 05 07:00:40 crc kubenswrapper[4780]: I1205 07:00:40.839920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerDied","Data":"2467aab2d90455af4b5e5e1e5ad1fd31c4b9a24d9c6fdaebf225b9fd024070dc"} Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.816475 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.816785 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.847577 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" 
event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerStarted","Data":"a8381d38adcba9b4d9c52c596254f1acefdcdffac56521e233feb250e82a90ed"} Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.850035 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.872946 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m68h5" podStartSLOduration=2.477409061 podStartE2EDuration="4.872925911s" podCreationTimestamp="2025-12-05 07:00:37 +0000 UTC" firstStartedPulling="2025-12-05 07:00:38.826556281 +0000 UTC m=+872.896072623" lastFinishedPulling="2025-12-05 07:00:41.222073141 +0000 UTC m=+875.291589473" observedRunningTime="2025-12-05 07:00:41.867742134 +0000 UTC m=+875.937258486" watchObservedRunningTime="2025-12-05 07:00:41.872925911 +0000 UTC m=+875.942442243" Dec 05 07:00:41 crc kubenswrapper[4780]: I1205 07:00:41.901262 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:43 crc kubenswrapper[4780]: I1205 07:00:43.617238 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:43 crc kubenswrapper[4780]: I1205 07:00:43.857306 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xh927" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="registry-server" containerID="cri-o://1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c" gracePeriod=2 Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.699833 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.777323 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content\") pod \"e5a4ae54-b43f-44f8-846c-1692c54398ea\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.777389 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxbsz\" (UniqueName: \"kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz\") pod \"e5a4ae54-b43f-44f8-846c-1692c54398ea\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.777445 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities\") pod \"e5a4ae54-b43f-44f8-846c-1692c54398ea\" (UID: \"e5a4ae54-b43f-44f8-846c-1692c54398ea\") " Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.778599 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities" (OuterVolumeSpecName: "utilities") pod "e5a4ae54-b43f-44f8-846c-1692c54398ea" (UID: "e5a4ae54-b43f-44f8-846c-1692c54398ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.791009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz" (OuterVolumeSpecName: "kube-api-access-jxbsz") pod "e5a4ae54-b43f-44f8-846c-1692c54398ea" (UID: "e5a4ae54-b43f-44f8-846c-1692c54398ea"). InnerVolumeSpecName "kube-api-access-jxbsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.808339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5a4ae54-b43f-44f8-846c-1692c54398ea" (UID: "e5a4ae54-b43f-44f8-846c-1692c54398ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.872976 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerID="1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c" exitCode=0 Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.873018 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerDied","Data":"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c"} Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.873045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xh927" event={"ID":"e5a4ae54-b43f-44f8-846c-1692c54398ea","Type":"ContainerDied","Data":"6853c41c5bea13dca04f23db3fc18720e14353152134ee2fe43e95bf85dc95a0"} Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.873064 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xh927" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.873074 4780 scope.go:117] "RemoveContainer" containerID="1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.878553 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.878600 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxbsz\" (UniqueName: \"kubernetes.io/projected/e5a4ae54-b43f-44f8-846c-1692c54398ea-kube-api-access-jxbsz\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.878612 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a4ae54-b43f-44f8-846c-1692c54398ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.893012 4780 scope.go:117] "RemoveContainer" containerID="f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.898987 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.902787 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xh927"] Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.931368 4780 scope.go:117] "RemoveContainer" containerID="111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.945070 4780 scope.go:117] "RemoveContainer" containerID="1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c" Dec 05 07:00:44 crc kubenswrapper[4780]: E1205 07:00:44.945577 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c\": container with ID starting with 1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c not found: ID does not exist" containerID="1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.945696 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c"} err="failed to get container status \"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c\": rpc error: code = NotFound desc = could not find container \"1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c\": container with ID starting with 1ce34729eae7c3e5f11667ed4202027727a87f0392022e0b373be45ade5ff69c not found: ID does not exist" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.945777 4780 scope.go:117] "RemoveContainer" containerID="f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10" Dec 05 07:00:44 crc kubenswrapper[4780]: E1205 07:00:44.946287 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10\": container with ID starting with f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10 not found: ID 
does not exist" containerID="f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.946325 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10"} err="failed to get container status \"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10\": rpc error: code = NotFound desc = could not find container \"f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10\": container with ID starting with f97bd1c59e90a391a9fc17ea14e2c2732cb6162e3bc94357f557d5ac5e5d6f10 not found: ID does not exist" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.946356 4780 scope.go:117] "RemoveContainer" containerID="111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd" Dec 05 07:00:44 crc kubenswrapper[4780]: E1205 07:00:44.946757 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd\": container with ID starting with 111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd not found: ID does not exist" containerID="111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd" Dec 05 07:00:44 crc kubenswrapper[4780]: I1205 07:00:44.946787 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd"} err="failed to get container status \"111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd\": rpc error: code = NotFound desc = could not find container \"111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd\": container with ID starting with 111aa93ff7d7b5b7396cbff5e48a9639fda565d9cc05838926722c4e875ae9bd not found: ID does not exist" Dec 05 07:00:46 crc kubenswrapper[4780]: I1205 07:00:46.144713 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" path="/var/lib/kubelet/pods/e5a4ae54-b43f-44f8-846c-1692c54398ea/volumes" Dec 05 07:00:47 crc kubenswrapper[4780]: I1205 07:00:47.942470 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:47 crc kubenswrapper[4780]: I1205 07:00:47.942558 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:47 crc kubenswrapper[4780]: I1205 07:00:47.990785 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:48 crc kubenswrapper[4780]: I1205 07:00:48.940343 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:49 crc kubenswrapper[4780]: I1205 07:00:49.052765 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:50 crc kubenswrapper[4780]: I1205 07:00:50.904301 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m68h5" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="registry-server" containerID="cri-o://a8381d38adcba9b4d9c52c596254f1acefdcdffac56521e233feb250e82a90ed" gracePeriod=2 Dec 05 07:00:51 crc kubenswrapper[4780]: I1205 07:00:51.506329 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-544c44bb58-hzmkv" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.268916 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn"] Dec 05 07:00:52 crc kubenswrapper[4780]: E1205 07:00:52.269718 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="extract-content" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.269807 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="extract-content" Dec 05 07:00:52 crc kubenswrapper[4780]: E1205 07:00:52.269883 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="extract-utilities" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.270027 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="extract-utilities" Dec 05 07:00:52 crc kubenswrapper[4780]: E1205 07:00:52.270117 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="registry-server" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.270204 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="registry-server" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.270385 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a4ae54-b43f-44f8-846c-1692c54398ea" containerName="registry-server" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.270944 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.274697 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xcpwx" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.275081 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.295454 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-27nsf"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.298124 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.298251 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.301654 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.304185 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.360671 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gffn7"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.361731 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.364380 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9jx2r" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.364406 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.364385 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.365508 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.366173 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-j5f55"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.367465 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.369172 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.383309 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-j5f55"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.383856 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkwft\" (UniqueName: \"kubernetes.io/projected/df87efac-4c66-45a5-86d4-9a36f7e21a53-kube-api-access-dkwft\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.383924 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df87efac-4c66-45a5-86d4-9a36f7e21a53-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485187 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkwft\" (UniqueName: \"kubernetes.io/projected/df87efac-4c66-45a5-86d4-9a36f7e21a53-kube-api-access-dkwft\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df87efac-4c66-45a5-86d4-9a36f7e21a53-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-sockets\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485296 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metallb-excludel2\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485354 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485462 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-metrics-certs\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-reloader\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclzm\" (UniqueName: \"kubernetes.io/projected/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-kube-api-access-rclzm\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485674 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlth7\" (UniqueName: \"kubernetes.io/projected/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-kube-api-access-dlth7\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485716 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-conf\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485768 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b354ba59-4664-4c61-abe6-e31896facfa5-frr-startup\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485826 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metrics-certs\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485856 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-cert\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485890 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4jz\" (UniqueName: \"kubernetes.io/projected/b354ba59-4664-4c61-abe6-e31896facfa5-kube-api-access-2p4jz\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485916 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-metrics\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.485928 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b354ba59-4664-4c61-abe6-e31896facfa5-metrics-certs\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.491538 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df87efac-4c66-45a5-86d4-9a36f7e21a53-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.503073 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkwft\" (UniqueName: \"kubernetes.io/projected/df87efac-4c66-45a5-86d4-9a36f7e21a53-kube-api-access-dkwft\") pod \"frr-k8s-webhook-server-7fcb986d4-4plmn\" (UID: \"df87efac-4c66-45a5-86d4-9a36f7e21a53\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.586957 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587298 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclzm\" (UniqueName: \"kubernetes.io/projected/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-kube-api-access-rclzm\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587363 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlth7\" (UniqueName: \"kubernetes.io/projected/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-kube-api-access-dlth7\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587389 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-conf\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b354ba59-4664-4c61-abe6-e31896facfa5-frr-startup\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587431 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metrics-certs\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587450 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-cert\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587469 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p4jz\" (UniqueName: \"kubernetes.io/projected/b354ba59-4664-4c61-abe6-e31896facfa5-kube-api-access-2p4jz\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-metrics\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b354ba59-4664-4c61-abe6-e31896facfa5-metrics-certs\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587726 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-sockets\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587758 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metallb-excludel2\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587812 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-metrics-certs\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.587833 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-reloader\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.588007 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-conf\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.588191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-reloader\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: E1205 07:00:52.588588 4780 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 07:00:52 crc kubenswrapper[4780]: E1205 07:00:52.588730 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist podName:aa62ab99-7a56-4e90-bbaa-0cd417c05ab2 nodeName:}" failed. No retries permitted until 2025-12-05 07:00:53.088702296 +0000 UTC m=+887.158218628 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist") pod "speaker-gffn7" (UID: "aa62ab99-7a56-4e90-bbaa-0cd417c05ab2") : secret "metallb-memberlist" not found Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.588916 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-frr-sockets\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.589005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b354ba59-4664-4c61-abe6-e31896facfa5-frr-startup\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.589282 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metallb-excludel2\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.589358 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b354ba59-4664-4c61-abe6-e31896facfa5-metrics\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.589679 4780 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.592613 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-metrics-certs\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.608853 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b354ba59-4664-4c61-abe6-e31896facfa5-metrics-certs\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.609222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-metrics-certs\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.613530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-cert\") pod \"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.618086 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlth7\" (UniqueName: \"kubernetes.io/projected/8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73-kube-api-access-dlth7\") pod 
\"controller-f8648f98b-j5f55\" (UID: \"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73\") " pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.634939 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclzm\" (UniqueName: \"kubernetes.io/projected/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-kube-api-access-rclzm\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.639935 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p4jz\" (UniqueName: \"kubernetes.io/projected/b354ba59-4664-4c61-abe6-e31896facfa5-kube-api-access-2p4jz\") pod \"frr-k8s-27nsf\" (UID: \"b354ba59-4664-4c61-abe6-e31896facfa5\") " pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.654277 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.684383 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.882601 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.914698 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-j5f55"] Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.931337 4780 generic.go:334] "Generic (PLEG): container finished" podID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerID="a8381d38adcba9b4d9c52c596254f1acefdcdffac56521e233feb250e82a90ed" exitCode=0 Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.931423 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerDied","Data":"a8381d38adcba9b4d9c52c596254f1acefdcdffac56521e233feb250e82a90ed"} Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.936683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"ba25a4c62a5d43a7c37322e672c22305ed257acde465c9ff88ef80dbbdbd7bde"} Dec 05 07:00:52 crc kubenswrapper[4780]: I1205 07:00:52.938237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" event={"ID":"df87efac-4c66-45a5-86d4-9a36f7e21a53","Type":"ContainerStarted","Data":"ff9d87239f36f3eaa400b90d437669dd717864fdb9c5953741dcb3028b133bae"} Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.044998 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.097533 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:53 crc kubenswrapper[4780]: E1205 07:00:53.097695 4780 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 07:00:53 crc kubenswrapper[4780]: E1205 07:00:53.097759 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist podName:aa62ab99-7a56-4e90-bbaa-0cd417c05ab2 nodeName:}" failed. No retries permitted until 2025-12-05 07:00:54.097743839 +0000 UTC m=+888.167260171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist") pod "speaker-gffn7" (UID: "aa62ab99-7a56-4e90-bbaa-0cd417c05ab2") : secret "metallb-memberlist" not found Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.198705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities\") pod \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.198812 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content\") pod \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.198866 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95jch\" (UniqueName: \"kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch\") pod \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\" (UID: \"8f8651ba-b796-491b-a747-d29f6b9ef4bd\") " Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.199945 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities" (OuterVolumeSpecName: "utilities") pod "8f8651ba-b796-491b-a747-d29f6b9ef4bd" (UID: "8f8651ba-b796-491b-a747-d29f6b9ef4bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.212057 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch" (OuterVolumeSpecName: "kube-api-access-95jch") pod "8f8651ba-b796-491b-a747-d29f6b9ef4bd" (UID: "8f8651ba-b796-491b-a747-d29f6b9ef4bd"). InnerVolumeSpecName "kube-api-access-95jch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.245291 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f8651ba-b796-491b-a747-d29f6b9ef4bd" (UID: "8f8651ba-b796-491b-a747-d29f6b9ef4bd"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.300603 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.300639 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95jch\" (UniqueName: \"kubernetes.io/projected/8f8651ba-b796-491b-a747-d29f6b9ef4bd-kube-api-access-95jch\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.300655 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f8651ba-b796-491b-a747-d29f6b9ef4bd-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.946373 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m68h5" event={"ID":"8f8651ba-b796-491b-a747-d29f6b9ef4bd","Type":"ContainerDied","Data":"311495daa26a169ff321caccd8d1cddf28d103f746d25149669558ff38a1e329"} Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.946428 4780 scope.go:117] "RemoveContainer" containerID="a8381d38adcba9b4d9c52c596254f1acefdcdffac56521e233feb250e82a90ed" Dec 05 07:00:53 crc kubenswrapper[4780]: I1205 07:00:53.946555 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m68h5" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.950384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-j5f55" event={"ID":"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73","Type":"ContainerStarted","Data":"b8c35290d47b1c53a0546bc7581eff1479af451a1b994506278a3d4c6bff7648"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.950417 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-j5f55" event={"ID":"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73","Type":"ContainerStarted","Data":"45436b872c809416262d574ac1b58a3cc8b2688948211951128c454bd2c5b3d7"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.950432 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-j5f55" event={"ID":"8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73","Type":"ContainerStarted","Data":"8fa89e0af547de158d850ad64577812c711499aa8b703cdeaa567b3e124370c8"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.950523 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.965525 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-j5f55" podStartSLOduration=1.965505663 podStartE2EDuration="1.965505663s" podCreationTimestamp="2025-12-05 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:00:53.964653242 +0000 UTC m=+888.034169574" watchObservedRunningTime="2025-12-05 07:00:53.965505663 +0000 UTC m=+888.035021995" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.966393 4780 scope.go:117] "RemoveContainer" containerID="2467aab2d90455af4b5e5e1e5ad1fd31c4b9a24d9c6fdaebf225b9fd024070dc" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.987175 
4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.990995 4780 scope.go:117] "RemoveContainer" containerID="d3112bcbf7f7db8c7079db4459021f38df3e533b0e720fea07ddd65c2c565b24" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:53.996972 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m68h5"] Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.110318 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.115131 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa62ab99-7a56-4e90-bbaa-0cd417c05ab2-memberlist\") pod \"speaker-gffn7\" (UID: \"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2\") " pod="metallb-system/speaker-gffn7" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.146166 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" path="/var/lib/kubelet/pods/8f8651ba-b796-491b-a747-d29f6b9ef4bd/volumes" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.176312 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gffn7" Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.963147 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gffn7" event={"ID":"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2","Type":"ContainerStarted","Data":"acce3da7b8f4f967fe4ab785583c7f5a5b9bde9be7c46a8022efa0b0095ed22f"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.963198 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gffn7" event={"ID":"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2","Type":"ContainerStarted","Data":"cef28293effa5e893b50475289eff185362e34f8f415b734cdf5e869d561b437"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.963211 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gffn7" event={"ID":"aa62ab99-7a56-4e90-bbaa-0cd417c05ab2","Type":"ContainerStarted","Data":"140529c161fa9c78b0fe7721107416b006f10d98ac1f637d243f5fd3ad6eaf8c"} Dec 05 07:00:54 crc kubenswrapper[4780]: I1205 07:00:54.964127 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gffn7" Dec 05 07:00:56 crc kubenswrapper[4780]: I1205 07:00:56.165936 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gffn7" podStartSLOduration=4.16592109 podStartE2EDuration="4.16592109s" podCreationTimestamp="2025-12-05 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:00:54.994413842 +0000 UTC m=+889.063930174" watchObservedRunningTime="2025-12-05 07:00:56.16592109 +0000 UTC m=+890.235437422" Dec 05 07:01:01 crc kubenswrapper[4780]: I1205 07:01:01.010987 4780 generic.go:334] "Generic (PLEG): container finished" podID="b354ba59-4664-4c61-abe6-e31896facfa5" containerID="a745a3fb1f32b8033eaa6622d9fb806d5cfc82b0be3e150c804dd985be86df0e" exitCode=0 Dec 05 07:01:01 crc kubenswrapper[4780]: I1205 07:01:01.011033 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerDied","Data":"a745a3fb1f32b8033eaa6622d9fb806d5cfc82b0be3e150c804dd985be86df0e"} Dec 05 07:01:01 crc kubenswrapper[4780]: I1205 07:01:01.014019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" event={"ID":"df87efac-4c66-45a5-86d4-9a36f7e21a53","Type":"ContainerStarted","Data":"af4477c7eb3772c4798ea40796f2e442e0134d55a45fff244f42b4f25754755d"} Dec 05 07:01:01 crc kubenswrapper[4780]: I1205 07:01:01.014180 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:01:01 crc kubenswrapper[4780]: I1205 07:01:01.059021 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" podStartSLOduration=1.37437094 podStartE2EDuration="9.058998404s" podCreationTimestamp="2025-12-05 07:00:52 +0000 UTC" firstStartedPulling="2025-12-05 07:00:52.903130428 +0000 UTC m=+886.972646760" lastFinishedPulling="2025-12-05 07:01:00.587757892 +0000 UTC m=+894.657274224" observedRunningTime="2025-12-05 07:01:01.051413195 +0000 UTC m=+895.120929537" watchObservedRunningTime="2025-12-05 07:01:01.058998404 +0000 UTC m=+895.128514736" Dec 05 07:01:02 crc kubenswrapper[4780]: I1205 07:01:02.021663 4780 generic.go:334] "Generic (PLEG): container finished" podID="b354ba59-4664-4c61-abe6-e31896facfa5" containerID="dc756f2e11eacc95463ba75decfafdfb65d3fbd98f32d98e97ce4ec604415aa2" exitCode=0 Dec 05 07:01:02 crc kubenswrapper[4780]: I1205 07:01:02.021757 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerDied","Data":"dc756f2e11eacc95463ba75decfafdfb65d3fbd98f32d98e97ce4ec604415aa2"} Dec 05 07:01:03 crc kubenswrapper[4780]: I1205 07:01:03.031227 4780 generic.go:334] "Generic (PLEG): container finished" podID="b354ba59-4664-4c61-abe6-e31896facfa5" containerID="7c551cf109cb5b3d61db3dc1feac143cd46a5392b64d0fc112ae291c2feda287" exitCode=0 Dec 05 07:01:03 crc kubenswrapper[4780]: I1205 07:01:03.031272 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerDied","Data":"7c551cf109cb5b3d61db3dc1feac143cd46a5392b64d0fc112ae291c2feda287"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041132 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"80d66090c41de0a7bf10c06fc7106d55c8cc167aaaea1bd9049df2c37cf906e4"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"eef9a24a3b7b8a4ab11d02a41153d7b0c8f457bbe2c61f635050346bad9e2d34"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"0b841b593dbad81950766f11b4cf377ac42e7a7111856899100fe1da4a9a5f5d"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"1ba6e5d3872bc05ebba8ed4487edb9cb52f837b46f02f4da41ae559aeb5a7ef1"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"3b4ac04233b94fbae9c656a75da531532f397b3720a8ccf48371b55446f6e156"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041452 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-27nsf" event={"ID":"b354ba59-4664-4c61-abe6-e31896facfa5","Type":"ContainerStarted","Data":"b4e075eb26bbc57c9bfbfb027a1490e0518b76ecd4ba73e3eb6df5a03053edc7"} Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.041515 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.181225 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gffn7" Dec 05 07:01:04 crc kubenswrapper[4780]: I1205 07:01:04.195850 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-27nsf" podStartSLOduration=4.398770053 podStartE2EDuration="12.195828784s" podCreationTimestamp="2025-12-05 07:00:52 +0000 UTC" firstStartedPulling="2025-12-05 07:00:52.807711776 +0000 UTC m=+886.877228098" lastFinishedPulling="2025-12-05 07:01:00.604770497 +0000 UTC m=+894.674286829" observedRunningTime="2025-12-05 07:01:04.064571974 +0000 UTC m=+898.134088306" watchObservedRunningTime="2025-12-05 07:01:04.195828784 +0000 UTC m=+898.265345116" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.838641 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx"] Dec 05 07:01:05 crc kubenswrapper[4780]: E1205 07:01:05.839119 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="extract-utilities" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.839131 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="extract-utilities" Dec 05 07:01:05 crc kubenswrapper[4780]: E1205 07:01:05.839151 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="registry-server" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.839157 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="registry-server" Dec 05 07:01:05 crc kubenswrapper[4780]: E1205 07:01:05.839170 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="extract-content" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.839177 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="extract-content" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.839277 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f8651ba-b796-491b-a747-d29f6b9ef4bd" containerName="registry-server" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.854205 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.857357 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.858645 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx"] Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.890853 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pvmk\" (UniqueName: \"kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.890994 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.891045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.992002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pvmk\" (UniqueName: \"kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.992078 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.992104 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.992601 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:05 crc kubenswrapper[4780]: I1205 07:01:05.992770 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:06 crc kubenswrapper[4780]: I1205 07:01:06.012979 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pvmk\" (UniqueName: \"kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:06 crc kubenswrapper[4780]: I1205 07:01:06.216431 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 07:01:06 crc kubenswrapper[4780]: I1205 07:01:06.224958 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:06 crc kubenswrapper[4780]: I1205 07:01:06.684026 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx"] Dec 05 07:01:06 crc kubenswrapper[4780]: W1205 07:01:06.690622 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17bffe80_37bd_4fa7_8db9_fd583dbe069e.slice/crio-9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e WatchSource:0}: Error finding container 9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e: Status 404 returned error can't find the container with id 9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e Dec 05 07:01:07 crc kubenswrapper[4780]: I1205 07:01:07.057724 4780 generic.go:334] "Generic (PLEG): container finished" podID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerID="2b1d7c90392734ae66fa6bad29a15506083f8a20e85ec4438766b8c35a32f6b3" exitCode=0 Dec 05 07:01:07 crc kubenswrapper[4780]: I1205 07:01:07.057845 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" event={"ID":"17bffe80-37bd-4fa7-8db9-fd583dbe069e","Type":"ContainerDied","Data":"2b1d7c90392734ae66fa6bad29a15506083f8a20e85ec4438766b8c35a32f6b3"} Dec 05 07:01:07 crc kubenswrapper[4780]: I1205 07:01:07.059332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" event={"ID":"17bffe80-37bd-4fa7-8db9-fd583dbe069e","Type":"ContainerStarted","Data":"9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e"} Dec 05 07:01:07 crc kubenswrapper[4780]: I1205 07:01:07.655177 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:01:07 crc 
kubenswrapper[4780]: I1205 07:01:07.690785 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:01:12 crc kubenswrapper[4780]: I1205 07:01:12.098490 4780 generic.go:334] "Generic (PLEG): container finished" podID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerID="471f4e0859181382f614056403d67065c3e18502e0c9e36988363f9afb8c09b2" exitCode=0 Dec 05 07:01:12 crc kubenswrapper[4780]: I1205 07:01:12.098591 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" event={"ID":"17bffe80-37bd-4fa7-8db9-fd583dbe069e","Type":"ContainerDied","Data":"471f4e0859181382f614056403d67065c3e18502e0c9e36988363f9afb8c09b2"} Dec 05 07:01:12 crc kubenswrapper[4780]: I1205 07:01:12.590787 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" Dec 05 07:01:12 crc kubenswrapper[4780]: I1205 07:01:12.687968 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-j5f55" Dec 05 07:01:13 crc kubenswrapper[4780]: I1205 07:01:13.107418 4780 generic.go:334] "Generic (PLEG): container finished" podID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerID="c2c710c49d87505a9b83bced6dcd2d82a624e65e65fe56b5691d15e0d866e2eb" exitCode=0 Dec 05 07:01:13 crc kubenswrapper[4780]: I1205 07:01:13.107442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" event={"ID":"17bffe80-37bd-4fa7-8db9-fd583dbe069e","Type":"ContainerDied","Data":"c2c710c49d87505a9b83bced6dcd2d82a624e65e65fe56b5691d15e0d866e2eb"} Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.370825 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.405623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pvmk\" (UniqueName: \"kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk\") pod \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.405768 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util\") pod \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.405811 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle\") pod \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\" (UID: \"17bffe80-37bd-4fa7-8db9-fd583dbe069e\") " Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.407180 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle" (OuterVolumeSpecName: "bundle") pod "17bffe80-37bd-4fa7-8db9-fd583dbe069e" (UID: "17bffe80-37bd-4fa7-8db9-fd583dbe069e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.422240 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util" (OuterVolumeSpecName: "util") pod "17bffe80-37bd-4fa7-8db9-fd583dbe069e" (UID: "17bffe80-37bd-4fa7-8db9-fd583dbe069e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.423927 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk" (OuterVolumeSpecName: "kube-api-access-9pvmk") pod "17bffe80-37bd-4fa7-8db9-fd583dbe069e" (UID: "17bffe80-37bd-4fa7-8db9-fd583dbe069e"). InnerVolumeSpecName "kube-api-access-9pvmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.507346 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-util\") on node \"crc\" DevicePath \"\"" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.507389 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17bffe80-37bd-4fa7-8db9-fd583dbe069e-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:01:14 crc kubenswrapper[4780]: I1205 07:01:14.507400 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pvmk\" (UniqueName: \"kubernetes.io/projected/17bffe80-37bd-4fa7-8db9-fd583dbe069e-kube-api-access-9pvmk\") on node \"crc\" DevicePath \"\"" Dec 05 07:01:15 crc kubenswrapper[4780]: I1205 07:01:15.120207 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" event={"ID":"17bffe80-37bd-4fa7-8db9-fd583dbe069e","Type":"ContainerDied","Data":"9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e"} Dec 05 07:01:15 crc kubenswrapper[4780]: I1205 07:01:15.120266 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx" Dec 05 07:01:15 crc kubenswrapper[4780]: I1205 07:01:15.120276 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4e1a41e86ab590a8b3720189d53df8555589bc5729b1b0e9921173f12e327e" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.797060 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb"] Dec 05 07:01:18 crc kubenswrapper[4780]: E1205 07:01:18.797741 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="pull" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.797752 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="pull" Dec 05 07:01:18 crc kubenswrapper[4780]: E1205 07:01:18.797767 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="util" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.797773 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="util" Dec 05 07:01:18 crc kubenswrapper[4780]: E1205 07:01:18.797790 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="extract" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.797796 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="extract" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.797908 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bffe80-37bd-4fa7-8db9-fd583dbe069e" containerName="extract" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.798270 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.801096 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-67fbg" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.801585 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.803382 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.822152 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb"] Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.859279 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92587f4f-b0ac-4050-bf77-797ba447e443-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.859344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8jh\" (UniqueName: \"kubernetes.io/projected/92587f4f-b0ac-4050-bf77-797ba447e443-kube-api-access-7t8jh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.960605 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8jh\" (UniqueName: \"kubernetes.io/projected/92587f4f-b0ac-4050-bf77-797ba447e443-kube-api-access-7t8jh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.960657 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92587f4f-b0ac-4050-bf77-797ba447e443-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.961123 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/92587f4f-b0ac-4050-bf77-797ba447e443-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:18 crc kubenswrapper[4780]: I1205 07:01:18.987732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8jh\" (UniqueName: \"kubernetes.io/projected/92587f4f-b0ac-4050-bf77-797ba447e443-kube-api-access-7t8jh\") pod \"cert-manager-operator-controller-manager-64cf6dff88-lp9bb\" (UID: \"92587f4f-b0ac-4050-bf77-797ba447e443\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:19 crc kubenswrapper[4780]: I1205 07:01:19.117115 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" Dec 05 07:01:19 crc kubenswrapper[4780]: I1205 07:01:19.343116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb"] Dec 05 07:01:19 crc kubenswrapper[4780]: W1205 07:01:19.358523 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92587f4f_b0ac_4050_bf77_797ba447e443.slice/crio-9b74644e40b69b786fb2b07aaf131a381517b507bc8f024a91578c95c02b1e1c WatchSource:0}: Error finding container 9b74644e40b69b786fb2b07aaf131a381517b507bc8f024a91578c95c02b1e1c: Status 404 returned error can't find the container with id 9b74644e40b69b786fb2b07aaf131a381517b507bc8f024a91578c95c02b1e1c Dec 05 07:01:20 crc kubenswrapper[4780]: I1205 07:01:20.155154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" event={"ID":"92587f4f-b0ac-4050-bf77-797ba447e443","Type":"ContainerStarted","Data":"9b74644e40b69b786fb2b07aaf131a381517b507bc8f024a91578c95c02b1e1c"} Dec 05 07:01:22 crc kubenswrapper[4780]: I1205 07:01:22.658476 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-27nsf" Dec 05 07:01:29 crc kubenswrapper[4780]: I1205 07:01:29.206820 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" event={"ID":"92587f4f-b0ac-4050-bf77-797ba447e443","Type":"ContainerStarted","Data":"c4f62e88c1b9c42591f4c088fc777818b95402805aa4008be849526215310231"} Dec 05 07:01:29 crc kubenswrapper[4780]: I1205 07:01:29.232972 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-lp9bb" podStartSLOduration=1.901376499 podStartE2EDuration="11.232948269s" podCreationTimestamp="2025-12-05 07:01:18 +0000 UTC" firstStartedPulling="2025-12-05 07:01:19.36103922 +0000 UTC m=+913.430555552" lastFinishedPulling="2025-12-05 07:01:28.69261099 +0000 UTC m=+922.762127322" observedRunningTime="2025-12-05 07:01:29.226902013 +0000 UTC m=+923.296418395" watchObservedRunningTime="2025-12-05 07:01:29.232948269 +0000 UTC m=+923.302464601" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.088005 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-72gg4"] Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.089126 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.091317 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.091441 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fc7dj" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.092114 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.099189 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-72gg4"] Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.146471 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.146538 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkhl\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-kube-api-access-qlkhl\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.248020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.248081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkhl\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-kube-api-access-qlkhl\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.265671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkhl\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-kube-api-access-qlkhl\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.265736 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37c07285-acc8-44f4-8fc3-fe29fa38cea2-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-72gg4\" (UID: \"37c07285-acc8-44f4-8fc3-fe29fa38cea2\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.404732 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:33 crc kubenswrapper[4780]: I1205 07:01:33.882571 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-72gg4"] Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.236770 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" event={"ID":"37c07285-acc8-44f4-8fc3-fe29fa38cea2","Type":"ContainerStarted","Data":"1f0a9ba43acbe669b4d403d3f146e2aae0ed930bbae30a47c63b4152b7e7516a"} Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.540312 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh"] Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.541083 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.543361 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bv485" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.548130 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh"] Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.671206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.671337 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5mw\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-kube-api-access-ch5mw\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.772556 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5mw\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-kube-api-access-ch5mw\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.772646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.789563 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.789594 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5mw\" (UniqueName: \"kubernetes.io/projected/e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65-kube-api-access-ch5mw\") pod \"cert-manager-cainjector-855d9ccff4-z7nlh\" (UID: \"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:34 crc kubenswrapper[4780]: I1205 07:01:34.860224 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" Dec 05 07:01:35 crc kubenswrapper[4780]: I1205 07:01:35.307091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh"] Dec 05 07:01:36 crc kubenswrapper[4780]: I1205 07:01:36.247750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" event={"ID":"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65","Type":"ContainerStarted","Data":"b0ca97f6d01cbd2d8dfd36c9c7773af05c48ab2b1f9b4c6e5b1061ea7a79ae03"} Dec 05 07:01:44 crc kubenswrapper[4780]: I1205 07:01:44.314605 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" event={"ID":"e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65","Type":"ContainerStarted","Data":"49e92ee74d1021a5f12f69d99b043aef3c1d998070ff3c268357e89db41a1dc5"} Dec 05 07:01:44 crc kubenswrapper[4780]: I1205 07:01:44.318474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" event={"ID":"37c07285-acc8-44f4-8fc3-fe29fa38cea2","Type":"ContainerStarted","Data":"f388a895327fffb1307aba2dc87297da82472211e6370eb24ffcae1816815c4d"} Dec 05 07:01:44 crc kubenswrapper[4780]: I1205 07:01:44.319010 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:44 crc kubenswrapper[4780]: I1205 07:01:44.337318 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-z7nlh" podStartSLOduration=2.157090798 podStartE2EDuration="10.337214807s" podCreationTimestamp="2025-12-05 07:01:34 +0000 UTC" firstStartedPulling="2025-12-05 07:01:35.346797016 +0000 UTC m=+929.416313348" lastFinishedPulling="2025-12-05 07:01:43.526921025 +0000 UTC m=+937.596437357" observedRunningTime="2025-12-05 07:01:44.330137355 +0000 UTC m=+938.399653697" watchObservedRunningTime="2025-12-05 07:01:44.337214807 +0000 UTC m=+938.406731139" Dec 05 07:01:44 crc kubenswrapper[4780]: I1205 07:01:44.367251 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" podStartSLOduration=1.746084555 podStartE2EDuration="11.36722887s" podCreationTimestamp="2025-12-05 07:01:33 +0000 UTC" firstStartedPulling="2025-12-05 07:01:33.891857151 +0000 UTC m=+927.961373483" lastFinishedPulling="2025-12-05 07:01:43.513001466 +0000 UTC m=+937.582517798" observedRunningTime="2025-12-05 07:01:44.359543782 +0000 UTC m=+938.429060124" watchObservedRunningTime="2025-12-05 07:01:44.36722887 +0000 UTC m=+938.436745202" Dec 05 07:01:48 crc kubenswrapper[4780]: I1205 07:01:48.408411 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-72gg4" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.069121 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pzxsq"] Dec 05 07:01:52 crc 
kubenswrapper[4780]: I1205 07:01:52.070816 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.074515 4780 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-zddg7" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.089561 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pzxsq"] Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.235508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-bound-sa-token\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.235600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvlg\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-kube-api-access-gbvlg\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.336343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvlg\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-kube-api-access-gbvlg\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.336747 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-bound-sa-token\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.362731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvlg\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-kube-api-access-gbvlg\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.363031 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2-bound-sa-token\") pod \"cert-manager-86cb77c54b-pzxsq\" (UID: \"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2\") " pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.386687 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-pzxsq" Dec 05 07:01:52 crc kubenswrapper[4780]: I1205 07:01:52.625450 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-pzxsq"] Dec 05 07:01:52 crc kubenswrapper[4780]: W1205 07:01:52.632978 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd0edece_4073_4ad7_8d5f_1a42fa0e9cf2.slice/crio-65ebeffea67b24bd4000a052084cf67b846474eed9ebcfc102cf442501225be0 WatchSource:0}: Error finding container 65ebeffea67b24bd4000a052084cf67b846474eed9ebcfc102cf442501225be0: Status 404 returned error can't find the container with id 65ebeffea67b24bd4000a052084cf67b846474eed9ebcfc102cf442501225be0 Dec 05 07:01:53 crc kubenswrapper[4780]: I1205 07:01:53.374519 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-pzxsq" event={"ID":"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2","Type":"ContainerStarted","Data":"004d57db3bb2e5a2dbc7f1bd3d6e17c081966fa8372629419cbebb93bd77c323"} Dec 05 07:01:53 crc kubenswrapper[4780]: I1205 07:01:53.374981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-pzxsq" event={"ID":"cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2","Type":"ContainerStarted","Data":"65ebeffea67b24bd4000a052084cf67b846474eed9ebcfc102cf442501225be0"} Dec 05 07:01:53 crc kubenswrapper[4780]: I1205 07:01:53.395402 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-pzxsq" podStartSLOduration=1.395385923 podStartE2EDuration="1.395385923s" podCreationTimestamp="2025-12-05 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:01:53.389700586 +0000 UTC m=+947.459216938" watchObservedRunningTime="2025-12-05 07:01:53.395385923 +0000 UTC m=+947.464902245" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.001292 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.002747 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.004909 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kjhhc" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.005411 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.006777 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.014103 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.158736 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqzg\" (UniqueName: \"kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg\") pod \"openstack-operator-index-vrclq\" (UID: \"2c172be5-0ed7-443b-aac0-fe2ea04449bf\") " pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.259702 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqzg\" (UniqueName: \"kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg\") pod \"openstack-operator-index-vrclq\" (UID: \"2c172be5-0ed7-443b-aac0-fe2ea04449bf\") " pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.280850 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndqzg\" (UniqueName: \"kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg\") pod \"openstack-operator-index-vrclq\" (UID: \"2c172be5-0ed7-443b-aac0-fe2ea04449bf\") " pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.326422 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:02 crc kubenswrapper[4780]: I1205 07:02:02.773422 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:03 crc kubenswrapper[4780]: I1205 07:02:03.432215 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrclq" event={"ID":"2c172be5-0ed7-443b-aac0-fe2ea04449bf","Type":"ContainerStarted","Data":"72cb437fa34615dfb0b7031fdaacf9780bf5ddea208d0185846532b51d7bff69"} Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.361637 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.451705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrclq" event={"ID":"2c172be5-0ed7-443b-aac0-fe2ea04449bf","Type":"ContainerStarted","Data":"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b"} Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.451913 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vrclq" podUID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" containerName="registry-server" containerID="cri-o://93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b" gracePeriod=2 Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.476013 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vrclq" podStartSLOduration=1.998694049 podStartE2EDuration="4.475986823s" podCreationTimestamp="2025-12-05 07:02:01 +0000 UTC" firstStartedPulling="2025-12-05 07:02:02.785173247 +0000 UTC m=+956.854689579" lastFinishedPulling="2025-12-05 07:02:05.262466021 +0000 UTC m=+959.331982353" observedRunningTime="2025-12-05 07:02:05.469390383 +0000 UTC m=+959.538906725" watchObservedRunningTime="2025-12-05 07:02:05.475986823 +0000 UTC m=+959.545503155" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.797953 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.907899 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndqzg\" (UniqueName: \"kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg\") pod \"2c172be5-0ed7-443b-aac0-fe2ea04449bf\" (UID: \"2c172be5-0ed7-443b-aac0-fe2ea04449bf\") " Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.915548 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg" (OuterVolumeSpecName: "kube-api-access-ndqzg") pod "2c172be5-0ed7-443b-aac0-fe2ea04449bf" (UID: "2c172be5-0ed7-443b-aac0-fe2ea04449bf"). InnerVolumeSpecName "kube-api-access-ndqzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.970288 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2jm4g"] Dec 05 07:02:05 crc kubenswrapper[4780]: E1205 07:02:05.970728 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" containerName="registry-server" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.970758 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" containerName="registry-server" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.971055 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" containerName="registry-server" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.972067 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:05 crc kubenswrapper[4780]: I1205 07:02:05.974639 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2jm4g"] Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.009673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh7l7\" (UniqueName: \"kubernetes.io/projected/7ec28974-de86-4b92-8635-e0ec75f5d605-kube-api-access-fh7l7\") pod \"openstack-operator-index-2jm4g\" (UID: \"7ec28974-de86-4b92-8635-e0ec75f5d605\") " pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.009734 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndqzg\" (UniqueName: \"kubernetes.io/projected/2c172be5-0ed7-443b-aac0-fe2ea04449bf-kube-api-access-ndqzg\") on node \"crc\" DevicePath \"\"" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.110313 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh7l7\" (UniqueName: \"kubernetes.io/projected/7ec28974-de86-4b92-8635-e0ec75f5d605-kube-api-access-fh7l7\") pod \"openstack-operator-index-2jm4g\" (UID: \"7ec28974-de86-4b92-8635-e0ec75f5d605\") " pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.125683 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh7l7\" (UniqueName: \"kubernetes.io/projected/7ec28974-de86-4b92-8635-e0ec75f5d605-kube-api-access-fh7l7\") pod \"openstack-operator-index-2jm4g\" (UID: \"7ec28974-de86-4b92-8635-e0ec75f5d605\") " pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.294755 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.460422 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" containerID="93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b" exitCode=0 Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.460488 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrclq" event={"ID":"2c172be5-0ed7-443b-aac0-fe2ea04449bf","Type":"ContainerDied","Data":"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b"} Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.460536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrclq" event={"ID":"2c172be5-0ed7-443b-aac0-fe2ea04449bf","Type":"ContainerDied","Data":"72cb437fa34615dfb0b7031fdaacf9780bf5ddea208d0185846532b51d7bff69"} Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.460567 4780 scope.go:117] "RemoveContainer" containerID="93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.460694 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vrclq" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.489922 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.493866 4780 scope.go:117] "RemoveContainer" containerID="93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b" Dec 05 07:02:06 crc kubenswrapper[4780]: E1205 07:02:06.494563 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b\": container with ID starting with 93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b not found: ID does not exist" containerID="93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.494598 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b"} err="failed to get container status \"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b\": rpc error: code = NotFound desc = could not find container \"93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b\": container with ID starting with 93f79ce7c2e8cde4acbbcceede2349077d28db1ddaebc000108c9365ca3d830b not found: ID does not exist" Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.495992 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vrclq"] Dec 05 07:02:06 crc kubenswrapper[4780]: I1205 07:02:06.746265 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2jm4g"] Dec 05 07:02:07 crc kubenswrapper[4780]: I1205 07:02:07.466537 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jm4g" event={"ID":"7ec28974-de86-4b92-8635-e0ec75f5d605","Type":"ContainerStarted","Data":"3978dc6cc3fe496fe3ac49c7764b489b2d25073de5f8b87b749a289da2fa2a37"} Dec 05 07:02:07 crc kubenswrapper[4780]: I1205 07:02:07.466748 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2jm4g" event={"ID":"7ec28974-de86-4b92-8635-e0ec75f5d605","Type":"ContainerStarted","Data":"58ff8d02e6be6c37b112e27e81a527e1d320c08c495831664a6a0cfdf0677805"} Dec 05 07:02:07 crc kubenswrapper[4780]: I1205 07:02:07.481850 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2jm4g" podStartSLOduration=2.053145294 podStartE2EDuration="2.481831231s" podCreationTimestamp="2025-12-05 07:02:05 +0000 UTC" firstStartedPulling="2025-12-05 07:02:06.771097658 +0000 UTC m=+960.840613990" lastFinishedPulling="2025-12-05 07:02:07.199783595 +0000 UTC m=+961.269299927" observedRunningTime="2025-12-05 07:02:07.478785372 +0000 UTC m=+961.548301724" watchObservedRunningTime="2025-12-05 07:02:07.481831231 +0000 UTC m=+961.551347573" Dec 05 07:02:08 crc kubenswrapper[4780]: I1205 07:02:08.159041 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c172be5-0ed7-443b-aac0-fe2ea04449bf" path="/var/lib/kubelet/pods/2c172be5-0ed7-443b-aac0-fe2ea04449bf/volumes" Dec 05 07:02:16 crc kubenswrapper[4780]: I1205 07:02:16.295582 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:16 crc kubenswrapper[4780]: I1205 07:02:16.296230 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:16 crc kubenswrapper[4780]: I1205 07:02:16.324821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:16 crc kubenswrapper[4780]: I1205 07:02:16.543356 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2jm4g" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.195600 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn"] Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.196778 4780 util.go:30] "No sandbox for pod can be found. 
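The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check using the timestamps copied from that entry (the field interpretation follows the logged names; only the arithmetic is asserted here):

    // slo.go: reproduce podStartE2EDuration and podStartSLOduration from the log above.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matching Go's default time.Time formatting used in the log fields.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-05 07:02:05 +0000 UTC")
        firstPull := mustParse("2025-12-05 07:02:06.771097658 +0000 UTC")
        lastPull := mustParse("2025-12-05 07:02:07.199783595 +0000 UTC")
        running := mustParse("2025-12-05 07:02:07.481831231 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
        fmt.Println(e2e, slo)                // prints: 2.481831231s 2.053145294s
    }

The printed values match podStartE2EDuration="2.481831231s" and podStartSLOduration=2.053145294 in the entry above, so roughly 0.43s of the observed two and a half seconds went to pulling the index image.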
Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.215858 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn"] Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.229677 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mpscf" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.250363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.250451 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8f9b\" (UniqueName: \"kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.250495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.351969 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.352063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8f9b\" (UniqueName: \"kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.352109 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.352719 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.352764 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.368699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8f9b\" (UniqueName: \"kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.517434 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:17 crc kubenswrapper[4780]: I1205 07:02:17.987087 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn"] Dec 05 07:02:18 crc kubenswrapper[4780]: I1205 07:02:18.530720 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerStarted","Data":"ae528341800e3f23c6fde221ca90833bc6d44da12f73604d57b31c55b2d2e15f"} Dec 05 07:02:19 crc kubenswrapper[4780]: I1205 07:02:19.537695 4780 generic.go:334] "Generic (PLEG): container finished" podID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerID="aa45709c4ac23f3cec5246bc8300d562ccfba17a63269d4af33336d664c56638" exitCode=0 Dec 05 07:02:19 crc kubenswrapper[4780]: I1205 07:02:19.537799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerDied","Data":"aa45709c4ac23f3cec5246bc8300d562ccfba17a63269d4af33336d664c56638"} Dec 05 07:02:20 crc kubenswrapper[4780]: I1205 07:02:20.545041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerStarted","Data":"9bd2132fbb601df9d35729d6258f2ead93293abf9e00adb21af172b9c8d1f526"} Dec 05 07:02:21 crc kubenswrapper[4780]: I1205 07:02:21.554677 4780 generic.go:334] "Generic (PLEG): container finished" podID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerID="9bd2132fbb601df9d35729d6258f2ead93293abf9e00adb21af172b9c8d1f526" exitCode=0 Dec 05 07:02:21 crc kubenswrapper[4780]: I1205 07:02:21.554719 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" 
event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerDied","Data":"9bd2132fbb601df9d35729d6258f2ead93293abf9e00adb21af172b9c8d1f526"} Dec 05 07:02:22 crc kubenswrapper[4780]: I1205 07:02:22.569480 4780 generic.go:334] "Generic (PLEG): container finished" podID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerID="30ffa4b15b16db2439d0fa2a44e4fc2d191ffc114ea9a9d61133461da40e9d8e" exitCode=0 Dec 05 07:02:22 crc kubenswrapper[4780]: I1205 07:02:22.569546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerDied","Data":"30ffa4b15b16db2439d0fa2a44e4fc2d191ffc114ea9a9d61133461da40e9d8e"} Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.828525 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.935515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8f9b\" (UniqueName: \"kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b\") pod \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.935609 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle\") pod \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.935669 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util\") pod \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\" (UID: \"823cbb49-bcc2-45fe-9bbf-505a61b7eecc\") " Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.936501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle" (OuterVolumeSpecName: "bundle") pod "823cbb49-bcc2-45fe-9bbf-505a61b7eecc" (UID: "823cbb49-bcc2-45fe-9bbf-505a61b7eecc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.941311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b" (OuterVolumeSpecName: "kube-api-access-q8f9b") pod "823cbb49-bcc2-45fe-9bbf-505a61b7eecc" (UID: "823cbb49-bcc2-45fe-9bbf-505a61b7eecc"). InnerVolumeSpecName "kube-api-access-q8f9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:02:23 crc kubenswrapper[4780]: I1205 07:02:23.956123 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util" (OuterVolumeSpecName: "util") pod "823cbb49-bcc2-45fe-9bbf-505a61b7eecc" (UID: "823cbb49-bcc2-45fe-9bbf-505a61b7eecc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.036566 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8f9b\" (UniqueName: \"kubernetes.io/projected/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-kube-api-access-q8f9b\") on node \"crc\" DevicePath \"\"" Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.036603 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.036614 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/823cbb49-bcc2-45fe-9bbf-505a61b7eecc-util\") on node \"crc\" DevicePath \"\"" Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.583866 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" event={"ID":"823cbb49-bcc2-45fe-9bbf-505a61b7eecc","Type":"ContainerDied","Data":"ae528341800e3f23c6fde221ca90833bc6d44da12f73604d57b31c55b2d2e15f"} Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.583928 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae528341800e3f23c6fde221ca90833bc6d44da12f73604d57b31c55b2d2e15f" Dec 05 07:02:24 crc kubenswrapper[4780]: I1205 07:02:24.583962 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.689698 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2"] Dec 05 07:02:29 crc kubenswrapper[4780]: E1205 07:02:29.690456 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="pull" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.690477 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="pull" Dec 05 07:02:29 crc kubenswrapper[4780]: E1205 07:02:29.690500 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="util" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.690508 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="util" Dec 05 07:02:29 crc kubenswrapper[4780]: E1205 07:02:29.690517 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="extract" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.690525 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="extract" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.690654 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="823cbb49-bcc2-45fe-9bbf-505a61b7eecc" containerName="extract" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.691102 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.693139 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-m455s" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.754334 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2"] Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.806076 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4cg5\" (UniqueName: \"kubernetes.io/projected/f138461d-58f8-48a4-a936-f5295892bdcd-kube-api-access-g4cg5\") pod \"openstack-operator-controller-operator-55b6fb9447-qt9q2\" (UID: \"f138461d-58f8-48a4-a936-f5295892bdcd\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.907643 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4cg5\" (UniqueName: \"kubernetes.io/projected/f138461d-58f8-48a4-a936-f5295892bdcd-kube-api-access-g4cg5\") pod \"openstack-operator-controller-operator-55b6fb9447-qt9q2\" (UID: \"f138461d-58f8-48a4-a936-f5295892bdcd\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.908004 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.908078 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:02:29 crc kubenswrapper[4780]: I1205 07:02:29.930566 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4cg5\" (UniqueName: \"kubernetes.io/projected/f138461d-58f8-48a4-a936-f5295892bdcd-kube-api-access-g4cg5\") pod \"openstack-operator-controller-operator-55b6fb9447-qt9q2\" (UID: \"f138461d-58f8-48a4-a936-f5295892bdcd\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:30 crc kubenswrapper[4780]: I1205 07:02:30.033337 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:30 crc kubenswrapper[4780]: I1205 07:02:30.465322 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2"] Dec 05 07:02:30 crc kubenswrapper[4780]: I1205 07:02:30.619028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" event={"ID":"f138461d-58f8-48a4-a936-f5295892bdcd","Type":"ContainerStarted","Data":"cc9beccf96a9684d1d5a8752692acf463cfbe48d7ee7c528c6816c235f672a39"} Dec 05 07:02:35 crc kubenswrapper[4780]: I1205 07:02:35.648384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" event={"ID":"f138461d-58f8-48a4-a936-f5295892bdcd","Type":"ContainerStarted","Data":"66ec4963992e4e043b882dc5f4b56957f8dc955b80cec9aa86f9cc31afac3da2"} Dec 05 07:02:35 crc kubenswrapper[4780]: I1205 07:02:35.648963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:35 crc kubenswrapper[4780]: I1205 07:02:35.682024 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" podStartSLOduration=1.7047270559999999 podStartE2EDuration="6.682007019s" podCreationTimestamp="2025-12-05 07:02:29 +0000 UTC" firstStartedPulling="2025-12-05 07:02:30.476098816 +0000 UTC m=+984.545615148" lastFinishedPulling="2025-12-05 07:02:35.453378779 +0000 UTC m=+989.522895111" observedRunningTime="2025-12-05 07:02:35.67711301 +0000 UTC m=+989.746629362" watchObservedRunningTime="2025-12-05 07:02:35.682007019 +0000 UTC m=+989.751523351" Dec 05 07:02:40 crc kubenswrapper[4780]: I1205 07:02:40.036177 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-qt9q2" Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.971568 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f"] Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.973126 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.974809 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dzhvt" Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.981364 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff"] Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.982700 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.985293 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-q7nbm" Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.988315 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f"] Dec 05 07:02:58 crc kubenswrapper[4780]: I1205 07:02:58.996141 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.020543 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.021621 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.023477 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-729ch" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.032285 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.034078 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.037214 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g4cbd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.067536 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.075244 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.076268 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.080471 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mq4qd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.088242 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.101183 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.102147 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.104848 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-l479l" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.121940 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.132534 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xpj\" (UniqueName: \"kubernetes.io/projected/d413e91e-0735-412e-8614-bd86a466267b-kube-api-access-66xpj\") pod \"cinder-operator-controller-manager-859b6ccc6-7fj5f\" (UID: \"d413e91e-0735-412e-8614-bd86a466267b\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.132591 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52lc\" (UniqueName: \"kubernetes.io/projected/12b890b6-660a-4a2f-a2cf-2cbce76cafc6-kube-api-access-g52lc\") pod \"glance-operator-controller-manager-77987cd8cd-kf66r\" (UID: \"12b890b6-660a-4a2f-a2cf-2cbce76cafc6\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.132630 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfzml\" (UniqueName: \"kubernetes.io/projected/d3c6c892-e943-45a9-bda7-63fbae6bc3c1-kube-api-access-wfzml\") pod \"designate-operator-controller-manager-78b4bc895b-4vfh9\" (UID: \"d3c6c892-e943-45a9-bda7-63fbae6bc3c1\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.132649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8hf\" (UniqueName: \"kubernetes.io/projected/e4d45329-536a-48cf-932c-22669f486a7c-kube-api-access-nj8hf\") pod \"barbican-operator-controller-manager-7d9dfd778-vjqff\" (UID: \"e4d45329-536a-48cf-932c-22669f486a7c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.132764 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.141813 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.142999 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.147649 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h65jc" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.148020 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.148917 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.149733 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.150084 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.154041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.155636 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xn8mz" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.155825 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6d726" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.155966 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.171333 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.182643 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.188945 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.190264 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.195026 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.196542 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6w8ls" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.198217 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.200161 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-smwp5" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.207506 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.224860 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.227183 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.230201 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9wthn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234234 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfzml\" (UniqueName: \"kubernetes.io/projected/d3c6c892-e943-45a9-bda7-63fbae6bc3c1-kube-api-access-wfzml\") pod \"designate-operator-controller-manager-78b4bc895b-4vfh9\" (UID: \"d3c6c892-e943-45a9-bda7-63fbae6bc3c1\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234318 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8hf\" (UniqueName: \"kubernetes.io/projected/e4d45329-536a-48cf-932c-22669f486a7c-kube-api-access-nj8hf\") pod \"barbican-operator-controller-manager-7d9dfd778-vjqff\" (UID: \"e4d45329-536a-48cf-932c-22669f486a7c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234351 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxfh\" (UniqueName: \"kubernetes.io/projected/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-kube-api-access-7dxfh\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234441 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqr6p\" (UniqueName: \"kubernetes.io/projected/fb14f9a1-3b2e-4c17-a750-b1188fff5b40-kube-api-access-gqr6p\") pod \"heat-operator-controller-manager-5f64f6f8bb-9zlhl\" (UID: \"fb14f9a1-3b2e-4c17-a750-b1188fff5b40\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234468 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpthf\" (UniqueName: \"kubernetes.io/projected/8450df13-5e1a-4f4e-86dc-b1c841845554-kube-api-access-bpthf\") pod \"ironic-operator-controller-manager-6c548fd776-7f7qk\" (UID: \"8450df13-5e1a-4f4e-86dc-b1c841845554\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234501 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drthd\" (UniqueName: \"kubernetes.io/projected/b819f602-ddd5-4a16-b998-6d7d78798681-kube-api-access-drthd\") pod \"keystone-operator-controller-manager-7765d96ddf-5pn44\" (UID: \"b819f602-ddd5-4a16-b998-6d7d78798681\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234531 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66xpj\" (UniqueName: \"kubernetes.io/projected/d413e91e-0735-412e-8614-bd86a466267b-kube-api-access-66xpj\") pod \"cinder-operator-controller-manager-859b6ccc6-7fj5f\" (UID: \"d413e91e-0735-412e-8614-bd86a466267b\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52lc\" (UniqueName: \"kubernetes.io/projected/12b890b6-660a-4a2f-a2cf-2cbce76cafc6-kube-api-access-g52lc\") pod \"glance-operator-controller-manager-77987cd8cd-kf66r\" (UID: \"12b890b6-660a-4a2f-a2cf-2cbce76cafc6\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.234599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j78z\" (UniqueName: \"kubernetes.io/projected/2f86d305-a39d-42ec-9a73-067610752615-kube-api-access-2j78z\") pod \"horizon-operator-controller-manager-68c6d99b8f-6j9rd\" (UID: \"2f86d305-a39d-42ec-9a73-067610752615\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.256172 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.262316 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66xpj\" (UniqueName: \"kubernetes.io/projected/d413e91e-0735-412e-8614-bd86a466267b-kube-api-access-66xpj\") pod \"cinder-operator-controller-manager-859b6ccc6-7fj5f\" (UID: \"d413e91e-0735-412e-8614-bd86a466267b\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.262745 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8hf\" (UniqueName: \"kubernetes.io/projected/e4d45329-536a-48cf-932c-22669f486a7c-kube-api-access-nj8hf\") pod \"barbican-operator-controller-manager-7d9dfd778-vjqff\" (UID: \"e4d45329-536a-48cf-932c-22669f486a7c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.274412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52lc\" (UniqueName: 
\"kubernetes.io/projected/12b890b6-660a-4a2f-a2cf-2cbce76cafc6-kube-api-access-g52lc\") pod \"glance-operator-controller-manager-77987cd8cd-kf66r\" (UID: \"12b890b6-660a-4a2f-a2cf-2cbce76cafc6\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.279373 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfzml\" (UniqueName: \"kubernetes.io/projected/d3c6c892-e943-45a9-bda7-63fbae6bc3c1-kube-api-access-wfzml\") pod \"designate-operator-controller-manager-78b4bc895b-4vfh9\" (UID: \"d3c6c892-e943-45a9-bda7-63fbae6bc3c1\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.294352 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.322906 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.324935 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.329953 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.338240 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.338692 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.338707 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zjlx2" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352514 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqr6p\" (UniqueName: \"kubernetes.io/projected/fb14f9a1-3b2e-4c17-a750-b1188fff5b40-kube-api-access-gqr6p\") pod \"heat-operator-controller-manager-5f64f6f8bb-9zlhl\" (UID: \"fb14f9a1-3b2e-4c17-a750-b1188fff5b40\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpthf\" (UniqueName: \"kubernetes.io/projected/8450df13-5e1a-4f4e-86dc-b1c841845554-kube-api-access-bpthf\") pod \"ironic-operator-controller-manager-6c548fd776-7f7qk\" (UID: \"8450df13-5e1a-4f4e-86dc-b1c841845554\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drthd\" (UniqueName: \"kubernetes.io/projected/b819f602-ddd5-4a16-b998-6d7d78798681-kube-api-access-drthd\") pod \"keystone-operator-controller-manager-7765d96ddf-5pn44\" (UID: \"b819f602-ddd5-4a16-b998-6d7d78798681\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352633 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/972a9e29-9c48-4d8e-9390-e91c9b422af8-kube-api-access-6h66n\") pod \"manila-operator-controller-manager-7c79b5df47-mmn9w\" (UID: \"972a9e29-9c48-4d8e-9390-e91c9b422af8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j78z\" (UniqueName: \"kubernetes.io/projected/2f86d305-a39d-42ec-9a73-067610752615-kube-api-access-2j78z\") pod \"horizon-operator-controller-manager-68c6d99b8f-6j9rd\" (UID: \"2f86d305-a39d-42ec-9a73-067610752615\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352744 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxfh\" (UniqueName: \"kubernetes.io/projected/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-kube-api-access-7dxfh\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352765 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 
Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352809 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7d5\" (UniqueName: \"kubernetes.io/projected/7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9-kube-api-access-7n7d5\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2nh4k\" (UID: \"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k"
Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.352834 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sg44\" (UniqueName: \"kubernetes.io/projected/1f3fb0ee-0381-48df-91b7-1a72bf5acd62-kube-api-access-8sg44\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pw8g2\" (UID: \"1f3fb0ee-0381-48df-91b7-1a72bf5acd62\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2"
Dec 05 07:02:59 crc kubenswrapper[4780]: E1205 07:02:59.353767 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 05 07:02:59 crc kubenswrapper[4780]: E1205 07:02:59.353850 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:02:59.853830491 +0000 UTC m=+1013.923346823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found
Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.369848 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd"]
Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.371968 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd"
Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.384503 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wwrrg"
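The nestedpendingoperations entry a few lines up shows how the kubelet gates a failed mount: the infra-operator "cert" SetUp fails because the webhook secret does not exist yet, and the operation is stamped "No retries permitted until" roughly 500ms (the logged durationBeforeRetry) after the failed attempt. A sketch of that gating, illustrative only; the initial 500ms is taken from the log, while the doubling and the cap are assumptions about the volume manager's backoff, not verified constants:

    // backoff.go: gate retries the way the nestedpendingoperations entry describes.
    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        lastErr time.Time
        delay   time.Duration
    }

    func (b *backoff) fail(now time.Time) {
        if b.delay == 0 {
            b.delay = 500 * time.Millisecond // matches durationBeforeRetry in the log
        } else if b.delay < 2*time.Minute { // cap is an assumption of this sketch
            b.delay *= 2
        }
        b.lastErr = now
    }

    func (b *backoff) retryAllowedAt() time.Time { return b.lastErr.Add(b.delay) }

    func main() {
        var b backoff
        t0, _ := time.Parse(time.RFC3339Nano, "2025-12-05T07:02:59.353850491Z")
        b.fail(t0)
        // Approximately the "No retries permitted until" time in the entry above.
        fmt.Println(b.retryAllowedAt())
    }

Once the operator creates the infra-operator-webhook-server-cert secret, a later retry succeeds and the pod proceeds; until then each attempt just pushes the gate further out.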
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.398658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqr6p\" (UniqueName: \"kubernetes.io/projected/fb14f9a1-3b2e-4c17-a750-b1188fff5b40-kube-api-access-gqr6p\") pod \"heat-operator-controller-manager-5f64f6f8bb-9zlhl\" (UID: \"fb14f9a1-3b2e-4c17-a750-b1188fff5b40\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.405033 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxfh\" (UniqueName: \"kubernetes.io/projected/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-kube-api-access-7dxfh\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.405756 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j78z\" (UniqueName: \"kubernetes.io/projected/2f86d305-a39d-42ec-9a73-067610752615-kube-api-access-2j78z\") pod \"horizon-operator-controller-manager-68c6d99b8f-6j9rd\" (UID: \"2f86d305-a39d-42ec-9a73-067610752615\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.406318 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.406790 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpthf\" (UniqueName: \"kubernetes.io/projected/8450df13-5e1a-4f4e-86dc-b1c841845554-kube-api-access-bpthf\") pod \"ironic-operator-controller-manager-6c548fd776-7f7qk\" (UID: \"8450df13-5e1a-4f4e-86dc-b1c841845554\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.409844 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drthd\" (UniqueName: \"kubernetes.io/projected/b819f602-ddd5-4a16-b998-6d7d78798681-kube-api-access-drthd\") pod \"keystone-operator-controller-manager-7765d96ddf-5pn44\" (UID: \"b819f602-ddd5-4a16-b998-6d7d78798681\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.417133 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.418018 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.435302 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.457623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7d5\" (UniqueName: \"kubernetes.io/projected/7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9-kube-api-access-7n7d5\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2nh4k\" (UID: \"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.457692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sg44\" (UniqueName: \"kubernetes.io/projected/1f3fb0ee-0381-48df-91b7-1a72bf5acd62-kube-api-access-8sg44\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pw8g2\" (UID: \"1f3fb0ee-0381-48df-91b7-1a72bf5acd62\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.457780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckkr\" (UniqueName: \"kubernetes.io/projected/063c7b0a-5211-4356-9452-e55deeeeb834-kube-api-access-tckkr\") pod \"nova-operator-controller-manager-697bc559fc-7plfw\" (UID: \"063c7b0a-5211-4356-9452-e55deeeeb834\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.457826 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/972a9e29-9c48-4d8e-9390-e91c9b422af8-kube-api-access-6h66n\") pod \"manila-operator-controller-manager-7c79b5df47-mmn9w\" (UID: \"972a9e29-9c48-4d8e-9390-e91c9b422af8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.464673 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.468966 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.477060 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tqrw6" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.477239 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.482583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7d5\" (UniqueName: \"kubernetes.io/projected/7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9-kube-api-access-7n7d5\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2nh4k\" (UID: \"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.485927 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.497411 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.498391 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.498768 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.499332 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.499408 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.500001 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.502598 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xn2dv" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.502794 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mjp28" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.505090 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h66n\" (UniqueName: \"kubernetes.io/projected/972a9e29-9c48-4d8e-9390-e91c9b422af8-kube-api-access-6h66n\") pod \"manila-operator-controller-manager-7c79b5df47-mmn9w\" (UID: \"972a9e29-9c48-4d8e-9390-e91c9b422af8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.519206 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.524530 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.527944 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.535326 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sg44\" (UniqueName: \"kubernetes.io/projected/1f3fb0ee-0381-48df-91b7-1a72bf5acd62-kube-api-access-8sg44\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pw8g2\" (UID: \"1f3fb0ee-0381-48df-91b7-1a72bf5acd62\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.546928 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.566939 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.568093 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4697\" (UniqueName: \"kubernetes.io/projected/891b32d7-a00e-4aee-b1a9-11a17e231cf1-kube-api-access-r4697\") pod \"ovn-operator-controller-manager-b6456fdb6-xbdxn\" (UID: \"891b32d7-a00e-4aee-b1a9-11a17e231cf1\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569218 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmtb\" (UniqueName: \"kubernetes.io/projected/4f817d18-c711-48ff-891e-f5f59fe1ec5f-kube-api-access-ttmtb\") pod \"placement-operator-controller-manager-78f8948974-n7pzh\" (UID: \"4f817d18-c711-48ff-891e-f5f59fe1ec5f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569317 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckkr\" (UniqueName: \"kubernetes.io/projected/063c7b0a-5211-4356-9452-e55deeeeb834-kube-api-access-tckkr\") pod \"nova-operator-controller-manager-697bc559fc-7plfw\" (UID: \"063c7b0a-5211-4356-9452-e55deeeeb834\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569347 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfqt\" (UniqueName: \"kubernetes.io/projected/7d2e66f9-622c-4723-abd6-d1d9689ac660-kube-api-access-fqfqt\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569369 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52q5\" (UniqueName: \"kubernetes.io/projected/c032b9cc-5da5-4011-a397-e564fedcf04d-kube-api-access-g52q5\") pod \"octavia-operator-controller-manager-998648c74-jfzgd\" (UID: \"c032b9cc-5da5-4011-a397-e564fedcf04d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.569848 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d5gr4" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.570106 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.598530 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.599171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckkr\" (UniqueName: \"kubernetes.io/projected/063c7b0a-5211-4356-9452-e55deeeeb834-kube-api-access-tckkr\") pod \"nova-operator-controller-manager-697bc559fc-7plfw\" (UID: \"063c7b0a-5211-4356-9452-e55deeeeb834\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.606680 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.607754 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.610813 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dplk8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.611063 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.647665 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-gsggt"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.650034 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.653211 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9l2hp" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.666353 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-gsggt"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.678732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhd77\" (UniqueName: \"kubernetes.io/projected/b2f9a2dc-4b04-4209-a427-1467873d3d19-kube-api-access-qhd77\") pod \"swift-operator-controller-manager-5f8c65bbfc-sl2j8\" (UID: \"b2f9a2dc-4b04-4209-a427-1467873d3d19\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.678791 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfqt\" (UniqueName: \"kubernetes.io/projected/7d2e66f9-622c-4723-abd6-d1d9689ac660-kube-api-access-fqfqt\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.678821 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.678873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g52q5\" (UniqueName: \"kubernetes.io/projected/c032b9cc-5da5-4011-a397-e564fedcf04d-kube-api-access-g52q5\") pod \"octavia-operator-controller-manager-998648c74-jfzgd\" (UID: \"c032b9cc-5da5-4011-a397-e564fedcf04d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.678922 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4697\" (UniqueName: \"kubernetes.io/projected/891b32d7-a00e-4aee-b1a9-11a17e231cf1-kube-api-access-r4697\") pod \"ovn-operator-controller-manager-b6456fdb6-xbdxn\" (UID: \"891b32d7-a00e-4aee-b1a9-11a17e231cf1\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.679034 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmtb\" (UniqueName: \"kubernetes.io/projected/4f817d18-c711-48ff-891e-f5f59fe1ec5f-kube-api-access-ttmtb\") pod \"placement-operator-controller-manager-78f8948974-n7pzh\" (UID: \"4f817d18-c711-48ff-891e-f5f59fe1ec5f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:02:59 crc kubenswrapper[4780]: E1205 07:02:59.680061 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:02:59 crc 
kubenswrapper[4780]: E1205 07:02:59.680151 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert podName:7d2e66f9-622c-4723-abd6-d1d9689ac660 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:00.180132663 +0000 UTC m=+1014.249648995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" (UID: "7d2e66f9-622c-4723-abd6-d1d9689ac660") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.684602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8tqw\" (UniqueName: \"kubernetes.io/projected/0504ce62-63ec-4224-883e-495a8de219a6-kube-api-access-z8tqw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-r5j4j\" (UID: \"0504ce62-63ec-4224-883e-495a8de219a6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.684699 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh59w\" (UniqueName: \"kubernetes.io/projected/1af7f32d-6c1b-4ed2-8511-4ca770bba111-kube-api-access-zh59w\") pod \"test-operator-controller-manager-5854674fcc-gsggt\" (UID: \"1af7f32d-6c1b-4ed2-8511-4ca770bba111\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.745961 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.747081 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.750178 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.751420 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4trs6" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.753920 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.753943 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4697\" (UniqueName: \"kubernetes.io/projected/891b32d7-a00e-4aee-b1a9-11a17e231cf1-kube-api-access-r4697\") pod \"ovn-operator-controller-manager-b6456fdb6-xbdxn\" (UID: \"891b32d7-a00e-4aee-b1a9-11a17e231cf1\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.758154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmtb\" (UniqueName: \"kubernetes.io/projected/4f817d18-c711-48ff-891e-f5f59fe1ec5f-kube-api-access-ttmtb\") pod \"placement-operator-controller-manager-78f8948974-n7pzh\" (UID: \"4f817d18-c711-48ff-891e-f5f59fe1ec5f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.771600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfqt\" (UniqueName: \"kubernetes.io/projected/7d2e66f9-622c-4723-abd6-d1d9689ac660-kube-api-access-fqfqt\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.778177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g52q5\" (UniqueName: \"kubernetes.io/projected/c032b9cc-5da5-4011-a397-e564fedcf04d-kube-api-access-g52q5\") pod \"octavia-operator-controller-manager-998648c74-jfzgd\" (UID: \"c032b9cc-5da5-4011-a397-e564fedcf04d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.783475 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.797539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8tqw\" (UniqueName: \"kubernetes.io/projected/0504ce62-63ec-4224-883e-495a8de219a6-kube-api-access-z8tqw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-r5j4j\" (UID: \"0504ce62-63ec-4224-883e-495a8de219a6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.797596 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrfn7\" (UniqueName: \"kubernetes.io/projected/9bf061f5-1016-4813-aad6-b50350f6a1c5-kube-api-access-nrfn7\") pod \"watcher-operator-controller-manager-769dc69bc-dr6pg\" (UID: \"9bf061f5-1016-4813-aad6-b50350f6a1c5\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.797634 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh59w\" (UniqueName: \"kubernetes.io/projected/1af7f32d-6c1b-4ed2-8511-4ca770bba111-kube-api-access-zh59w\") pod \"test-operator-controller-manager-5854674fcc-gsggt\" (UID: \"1af7f32d-6c1b-4ed2-8511-4ca770bba111\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.797665 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhd77\" (UniqueName: \"kubernetes.io/projected/b2f9a2dc-4b04-4209-a427-1467873d3d19-kube-api-access-qhd77\") pod \"swift-operator-controller-manager-5f8c65bbfc-sl2j8\" (UID: \"b2f9a2dc-4b04-4209-a427-1467873d3d19\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.836437 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh59w\" (UniqueName: \"kubernetes.io/projected/1af7f32d-6c1b-4ed2-8511-4ca770bba111-kube-api-access-zh59w\") pod \"test-operator-controller-manager-5854674fcc-gsggt\" (UID: \"1af7f32d-6c1b-4ed2-8511-4ca770bba111\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.837153 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8tqw\" (UniqueName: \"kubernetes.io/projected/0504ce62-63ec-4224-883e-495a8de219a6-kube-api-access-z8tqw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-r5j4j\" (UID: \"0504ce62-63ec-4224-883e-495a8de219a6\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.837770 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhd77\" (UniqueName: \"kubernetes.io/projected/b2f9a2dc-4b04-4209-a427-1467873d3d19-kube-api-access-qhd77\") pod \"swift-operator-controller-manager-5f8c65bbfc-sl2j8\" (UID: \"b2f9a2dc-4b04-4209-a427-1467873d3d19\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.856721 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 
07:02:59.858029 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.860184 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.862383 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-djxvt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.862472 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.882722 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.895573 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.904503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.904567 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.904610 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclcb\" (UniqueName: \"kubernetes.io/projected/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-kube-api-access-jclcb\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.904670 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrfn7\" (UniqueName: \"kubernetes.io/projected/9bf061f5-1016-4813-aad6-b50350f6a1c5-kube-api-access-nrfn7\") pod \"watcher-operator-controller-manager-769dc69bc-dr6pg\" (UID: \"9bf061f5-1016-4813-aad6-b50350f6a1c5\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:02:59 crc kubenswrapper[4780]: E1205 07:02:59.904828 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.907116 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: 
\"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:02:59 crc kubenswrapper[4780]: E1205 07:02:59.907546 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:00.907509531 +0000 UTC m=+1014.977025863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.908028 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.909074 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.911271 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.927311 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.928525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrfn7\" (UniqueName: \"kubernetes.io/projected/9bf061f5-1016-4813-aad6-b50350f6a1c5-kube-api-access-nrfn7\") pod \"watcher-operator-controller-manager-769dc69bc-dr6pg\" (UID: \"9bf061f5-1016-4813-aad6-b50350f6a1c5\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.936247 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.937435 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.941127 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4dpk8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.942644 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.955003 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v"] Dec 05 07:02:59 crc kubenswrapper[4780]: I1205 07:02:59.977730 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.017516 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.017588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwg46\" (UniqueName: \"kubernetes.io/projected/908cf347-4346-4eb1-996f-b214491207e0-kube-api-access-vwg46\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xmn7v\" (UID: \"908cf347-4346-4eb1-996f-b214491207e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.017638 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.017668 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclcb\" (UniqueName: \"kubernetes.io/projected/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-kube-api-access-jclcb\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.017965 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.018016 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:00.51800019 +0000 UTC m=+1014.587516522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "metrics-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.018191 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.018382 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:00.51835815 +0000 UTC m=+1014.587874482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.045618 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclcb\" (UniqueName: \"kubernetes.io/projected/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-kube-api-access-jclcb\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.076542 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.117849 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.118825 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwg46\" (UniqueName: \"kubernetes.io/projected/908cf347-4346-4eb1-996f-b214491207e0-kube-api-access-vwg46\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xmn7v\" (UID: \"908cf347-4346-4eb1-996f-b214491207e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.147244 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwg46\" (UniqueName: \"kubernetes.io/projected/908cf347-4346-4eb1-996f-b214491207e0-kube-api-access-vwg46\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xmn7v\" (UID: \"908cf347-4346-4eb1-996f-b214491207e0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.220741 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.220953 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.221117 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert podName:7d2e66f9-622c-4723-abd6-d1d9689ac660 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:01.22109679 +0000 UTC m=+1015.290613122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" (UID: "7d2e66f9-622c-4723-abd6-d1d9689ac660") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.361149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.530496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.530581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.530711 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.530756 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:01.530742805 +0000 UTC m=+1015.600259137 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.531065 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.531090 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:01.531081574 +0000 UTC m=+1015.600597906 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "metrics-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.594130 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.598485 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f86d305_a39d_42ec_9a73_067610752615.slice/crio-f77162f6004d342c0369bdba207833cea61bb3940749311a10f121ee12a37d8c WatchSource:0}: Error finding container f77162f6004d342c0369bdba207833cea61bb3940749311a10f121ee12a37d8c: Status 404 returned error can't find the container with id f77162f6004d342c0369bdba207833cea61bb3940749311a10f121ee12a37d8c Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.610975 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f"] Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.635079 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl"] Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.639683 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9"] Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.658413 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.667359 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb14f9a1_3b2e_4c17_a750_b1188fff5b40.slice/crio-62ad4af3681aa2439e2f3742e7975db0c9868be7348b4a9ad24117eefcd78ffe WatchSource:0}: Error finding container 62ad4af3681aa2439e2f3742e7975db0c9868be7348b4a9ad24117eefcd78ffe: Status 404 returned error can't find the container with id 62ad4af3681aa2439e2f3742e7975db0c9868be7348b4a9ad24117eefcd78ffe Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.671073 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b890b6_660a_4a2f_a2cf_2cbce76cafc6.slice/crio-c51a91b6749f5a6d4e90022bff7354c973e49ecc289387980fbe896dd2672cd3 WatchSource:0}: Error finding container c51a91b6749f5a6d4e90022bff7354c973e49ecc289387980fbe896dd2672cd3: Status 404 returned error can't find the container with id c51a91b6749f5a6d4e90022bff7354c973e49ecc289387980fbe896dd2672cd3 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.765545 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.765671 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f1b2ca9_ee98_43a5_8346_dfa6a59d03c9.slice/crio-9c465f72c05d8434e053e0243620909994ec45cf7b22ef731af3e0c5bfe9c356 WatchSource:0}: Error finding container 9c465f72c05d8434e053e0243620909994ec45cf7b22ef731af3e0c5bfe9c356: Status 404 returned error can't find the 
container with id 9c465f72c05d8434e053e0243620909994ec45cf7b22ef731af3e0c5bfe9c356 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.800401 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.805426 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8450df13_5e1a_4f4e_86dc_b1c841845554.slice/crio-c0805bd74795356b90283f87c059cc1f00fe7a29bd4f29b3d1814d63e56e4453 WatchSource:0}: Error finding container c0805bd74795356b90283f87c059cc1f00fe7a29bd4f29b3d1814d63e56e4453: Status 404 returned error can't find the container with id c0805bd74795356b90283f87c059cc1f00fe7a29bd4f29b3d1814d63e56e4453 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.813221 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" event={"ID":"2f86d305-a39d-42ec-9a73-067610752615","Type":"ContainerStarted","Data":"f77162f6004d342c0369bdba207833cea61bb3940749311a10f121ee12a37d8c"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.815514 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" event={"ID":"d3c6c892-e943-45a9-bda7-63fbae6bc3c1","Type":"ContainerStarted","Data":"2bb7863a49cbc22c23a7e2f855c753a9c87dc73b5514a5425325ed576e533fd7"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.816510 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" event={"ID":"fb14f9a1-3b2e-4c17-a750-b1188fff5b40","Type":"ContainerStarted","Data":"62ad4af3681aa2439e2f3742e7975db0c9868be7348b4a9ad24117eefcd78ffe"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.823746 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" event={"ID":"8450df13-5e1a-4f4e-86dc-b1c841845554","Type":"ContainerStarted","Data":"c0805bd74795356b90283f87c059cc1f00fe7a29bd4f29b3d1814d63e56e4453"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.827414 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" event={"ID":"12b890b6-660a-4a2f-a2cf-2cbce76cafc6","Type":"ContainerStarted","Data":"c51a91b6749f5a6d4e90022bff7354c973e49ecc289387980fbe896dd2672cd3"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.828260 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" event={"ID":"d413e91e-0735-412e-8614-bd86a466267b","Type":"ContainerStarted","Data":"92ef23ce2658b3b0837cadec9083eec80a94b0ea3c122d06c12363a88deb4e9f"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.829021 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" event={"ID":"e4d45329-536a-48cf-932c-22669f486a7c","Type":"ContainerStarted","Data":"b042c38eebda62a63c4e54bd629faed2e7b69a819d56f85022dbcbca294bef10"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.829842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" 
event={"ID":"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9","Type":"ContainerStarted","Data":"9c465f72c05d8434e053e0243620909994ec45cf7b22ef731af3e0c5bfe9c356"} Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.909425 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.920963 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc032b9cc_5da5_4011_a397_e564fedcf04d.slice/crio-dbdbb930e41654d0dd7fd668a1e01193bbbec9ebf7e08da25f2ef2b87552da49 WatchSource:0}: Error finding container dbdbb930e41654d0dd7fd668a1e01193bbbec9ebf7e08da25f2ef2b87552da49: Status 404 returned error can't find the container with id dbdbb930e41654d0dd7fd668a1e01193bbbec9ebf7e08da25f2ef2b87552da49 Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.926889 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod972a9e29_9c48_4d8e_9390_e91c9b422af8.slice/crio-9751bc8501eee0cda856476e5cbab28c97cbdb631d3c3a11053608e9c5549ec0 WatchSource:0}: Error finding container 9751bc8501eee0cda856476e5cbab28c97cbdb631d3c3a11053608e9c5549ec0: Status 404 returned error can't find the container with id 9751bc8501eee0cda856476e5cbab28c97cbdb631d3c3a11053608e9c5549ec0 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.929537 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.930681 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f817d18_c711_48ff_891e_f5f59fe1ec5f.slice/crio-89d0ca084abde4822ea34436fbbfe0aa6c60fe67c2f151491d66f5f02a69dc03 WatchSource:0}: Error finding container 89d0ca084abde4822ea34436fbbfe0aa6c60fe67c2f151491d66f5f02a69dc03: Status 404 returned error can't find the container with id 89d0ca084abde4822ea34436fbbfe0aa6c60fe67c2f151491d66f5f02a69dc03 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.936225 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh"] Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.943120 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2"] Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.943694 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drthd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-5pn44_openstack-operators(b819f602-ddd5-4a16-b998-6d7d78798681): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.948093 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drthd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-5pn44_openstack-operators(b819f602-ddd5-4a16-b998-6d7d78798681): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.949175 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.950309 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44"] Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.954303 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw"] Dec 05 07:03:00 crc kubenswrapper[4780]: W1205 07:03:00.965238 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod063c7b0a_5211_4356_9452_e55deeeeb834.slice/crio-7f1b66b7fab9550398aaa68d361a5f9285703cc74ff12fc63595e4ea909ca4f7 WatchSource:0}: Error finding container 7f1b66b7fab9550398aaa68d361a5f9285703cc74ff12fc63595e4ea909ca4f7: Status 404 returned error can't find the container with id 7f1b66b7fab9550398aaa68d361a5f9285703cc74ff12fc63595e4ea909ca4f7 Dec 05 07:03:00 crc kubenswrapper[4780]: I1205 07:03:00.965702 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.965929 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.965976 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:02.965960246 +0000 UTC m=+1017.035476578 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.967871 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tckkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7plfw_openstack-operators(063c7b0a-5211-4356-9452-e55deeeeb834): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.969803 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tckkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7plfw_openstack-operators(063c7b0a-5211-4356-9452-e55deeeeb834): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:00 crc kubenswrapper[4780]: E1205 07:03:00.971051 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.155231 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-gsggt"] Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.160269 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v"] Dec 05 07:03:01 crc kubenswrapper[4780]: W1205 07:03:01.167032 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af7f32d_6c1b_4ed2_8511_4ca770bba111.slice/crio-8600296a5371dffb46928c30839a6faffdbd6c42a0f2778bb1d85108064d9b69 WatchSource:0}: Error finding container 8600296a5371dffb46928c30839a6faffdbd6c42a0f2778bb1d85108064d9b69: Status 404 returned error can't find the container with id 8600296a5371dffb46928c30839a6faffdbd6c42a0f2778bb1d85108064d9b69 Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.176064 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn"] Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.177214 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwg46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xmn7v_openstack-operators(908cf347-4346-4eb1-996f-b214491207e0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: W1205 07:03:01.178048 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod891b32d7_a00e_4aee_b1a9_11a17e231cf1.slice/crio-7cb446c254bfedb73d74387a5a2c1b13dc46edacd01843161c8cc41130f18cb8 WatchSource:0}: Error finding container 7cb446c254bfedb73d74387a5a2c1b13dc46edacd01843161c8cc41130f18cb8: Status 404 returned error can't find the container with id 7cb446c254bfedb73d74387a5a2c1b13dc46edacd01843161c8cc41130f18cb8 Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.178620 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podUID="908cf347-4346-4eb1-996f-b214491207e0" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.180322 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4697,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xbdxn_openstack-operators(891b32d7-a00e-4aee-b1a9-11a17e231cf1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.182102 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4697,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xbdxn_openstack-operators(891b32d7-a00e-4aee-b1a9-11a17e231cf1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.183570 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" podUID="891b32d7-a00e-4aee-b1a9-11a17e231cf1" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.184485 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8"] Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.191859 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg"] Dec 05 07:03:01 crc kubenswrapper[4780]: W1205 07:03:01.192216 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0504ce62_63ec_4224_883e_495a8de219a6.slice/crio-51fbdce9d8c772c9aa7a3ee1b7ef6c35cb5b21bfa80a9ff28c0ae751875aa62a WatchSource:0}: Error finding container 51fbdce9d8c772c9aa7a3ee1b7ef6c35cb5b21bfa80a9ff28c0ae751875aa62a: Status 404 returned error can't find the container with id 51fbdce9d8c772c9aa7a3ee1b7ef6c35cb5b21bfa80a9ff28c0ae751875aa62a Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.192529 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhd77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-sl2j8_openstack-operators(b2f9a2dc-4b04-4209-a427-1467873d3d19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: W1205 07:03:01.193605 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bf061f5_1016_4813_aad6_b50350f6a1c5.slice/crio-04c71b341a9ea37f073252c5e536b7f1e2c9afbfb91038369a6655fec000db9f WatchSource:0}: Error finding container 04c71b341a9ea37f073252c5e536b7f1e2c9afbfb91038369a6655fec000db9f: Status 404 returned error can't find the container with id 04c71b341a9ea37f073252c5e536b7f1e2c9afbfb91038369a6655fec000db9f Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.194819 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhd77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-sl2j8_openstack-operators(b2f9a2dc-4b04-4209-a427-1467873d3d19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.195936 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" podUID="b2f9a2dc-4b04-4209-a427-1467873d3d19" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.196341 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8tqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-r5j4j_openstack-operators(0504ce62-63ec-4224-883e-495a8de219a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.198033 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z8tqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-r5j4j_openstack-operators(0504ce62-63ec-4224-883e-495a8de219a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.198498 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrfn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-dr6pg_openstack-operators(9bf061f5-1016-4813-aad6-b50350f6a1c5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.199759 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" podUID="0504ce62-63ec-4224-883e-495a8de219a6" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.201022 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrfn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-dr6pg_openstack-operators(9bf061f5-1016-4813-aad6-b50350f6a1c5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.201445 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j"] Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.202371 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" podUID="9bf061f5-1016-4813-aad6-b50350f6a1c5" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.281332 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.281666 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.281745 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert podName:7d2e66f9-622c-4723-abd6-d1d9689ac660 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:03.281705931 +0000 UTC m=+1017.351222263 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" (UID: "7d2e66f9-622c-4723-abd6-d1d9689ac660") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.585464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.585583 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.585815 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.585905 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:03.585868373 +0000 UTC m=+1017.655384705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "metrics-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.585908 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.586046 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:03.586026588 +0000 UTC m=+1017.655542920 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "webhook-server-cert" not found Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.842343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" event={"ID":"0504ce62-63ec-4224-883e-495a8de219a6","Type":"ContainerStarted","Data":"51fbdce9d8c772c9aa7a3ee1b7ef6c35cb5b21bfa80a9ff28c0ae751875aa62a"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.843673 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" event={"ID":"b2f9a2dc-4b04-4209-a427-1467873d3d19","Type":"ContainerStarted","Data":"962fd05f51681c39d3ab50dddc01e706716ac6b4f47563504ae6cbfe3e9dcad3"} Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.845813 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" podUID="b2f9a2dc-4b04-4209-a427-1467873d3d19" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.846178 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" podUID="0504ce62-63ec-4224-883e-495a8de219a6" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.846691 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" event={"ID":"908cf347-4346-4eb1-996f-b214491207e0","Type":"ContainerStarted","Data":"c8947a50d998e7955a4ec74daffff66a15c6455a41f2078aec70435c4059f65a"} Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.847746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podUID="908cf347-4346-4eb1-996f-b214491207e0" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.848717 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" event={"ID":"4f817d18-c711-48ff-891e-f5f59fe1ec5f","Type":"ContainerStarted","Data":"89d0ca084abde4822ea34436fbbfe0aa6c60fe67c2f151491d66f5f02a69dc03"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.852262 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" event={"ID":"9bf061f5-1016-4813-aad6-b50350f6a1c5","Type":"ContainerStarted","Data":"04c71b341a9ea37f073252c5e536b7f1e2c9afbfb91038369a6655fec000db9f"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.854695 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" event={"ID":"b819f602-ddd5-4a16-b998-6d7d78798681","Type":"ContainerStarted","Data":"79faed056d4b6c95f174e940004ff196b0f23db5f7e8aec0d1d9a286b3f4622d"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.855828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" event={"ID":"1f3fb0ee-0381-48df-91b7-1a72bf5acd62","Type":"ContainerStarted","Data":"b1fd61ef993d903bbb9d4c390f2b281a2acdbf8461d590bd138bd228a904c155"} Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.862099 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.862976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" podUID="9bf061f5-1016-4813-aad6-b50350f6a1c5" Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.864014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" event={"ID":"972a9e29-9c48-4d8e-9390-e91c9b422af8","Type":"ContainerStarted","Data":"9751bc8501eee0cda856476e5cbab28c97cbdb631d3c3a11053608e9c5549ec0"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.867992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" event={"ID":"1af7f32d-6c1b-4ed2-8511-4ca770bba111","Type":"ContainerStarted","Data":"8600296a5371dffb46928c30839a6faffdbd6c42a0f2778bb1d85108064d9b69"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.869496 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" event={"ID":"c032b9cc-5da5-4011-a397-e564fedcf04d","Type":"ContainerStarted","Data":"dbdbb930e41654d0dd7fd668a1e01193bbbec9ebf7e08da25f2ef2b87552da49"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.872414 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" 
event={"ID":"063c7b0a-5211-4356-9452-e55deeeeb834","Type":"ContainerStarted","Data":"7f1b66b7fab9550398aaa68d361a5f9285703cc74ff12fc63595e4ea909ca4f7"} Dec 05 07:03:01 crc kubenswrapper[4780]: I1205 07:03:01.882526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" event={"ID":"891b32d7-a00e-4aee-b1a9-11a17e231cf1","Type":"ContainerStarted","Data":"7cb446c254bfedb73d74387a5a2c1b13dc46edacd01843161c8cc41130f18cb8"} Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.882750 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:01 crc kubenswrapper[4780]: E1205 07:03:01.892298 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" podUID="891b32d7-a00e-4aee-b1a9-11a17e231cf1" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.901406 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" podUID="891b32d7-a00e-4aee-b1a9-11a17e231cf1" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.901629 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" podUID="0504ce62-63ec-4224-883e-495a8de219a6" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.901707 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podUID="908cf347-4346-4eb1-996f-b214491207e0" Dec 05 07:03:02 crc 
kubenswrapper[4780]: E1205 07:03:02.902184 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" podUID="b2f9a2dc-4b04-4209-a427-1467873d3d19" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.902242 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" podUID="9bf061f5-1016-4813-aad6-b50350f6a1c5" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.902292 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:02 crc kubenswrapper[4780]: E1205 07:03:02.904089 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:03:03 crc kubenswrapper[4780]: I1205 07:03:03.027124 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.027648 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.027908 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:07.027799582 +0000 UTC m=+1021.097315914 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: I1205 07:03:03.331644 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.331844 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.331905 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert podName:7d2e66f9-622c-4723-abd6-d1d9689ac660 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:07.331891862 +0000 UTC m=+1021.401408194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" (UID: "7d2e66f9-622c-4723-abd6-d1d9689ac660") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: I1205 07:03:03.634689 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:03 crc kubenswrapper[4780]: I1205 07:03:03.634819 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.634954 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.634980 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.635057 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:07.635034027 +0000 UTC m=+1021.704550399 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "metrics-server-cert" not found Dec 05 07:03:03 crc kubenswrapper[4780]: E1205 07:03:03.635075 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:07.635067408 +0000 UTC m=+1021.704583830 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: I1205 07:03:07.086530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.086692 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.086948 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:15.08692926 +0000 UTC m=+1029.156445592 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: I1205 07:03:07.390998 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.391160 4780 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.391224 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert podName:7d2e66f9-622c-4723-abd6-d1d9689ac660 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:15.391205386 +0000 UTC m=+1029.460721718 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" (UID: "7d2e66f9-622c-4723-abd6-d1d9689ac660") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: I1205 07:03:07.694660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:07 crc kubenswrapper[4780]: I1205 07:03:07.694751 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.694897 4780 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.694948 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:15.694934466 +0000 UTC m=+1029.764450798 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "metrics-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.695231 4780 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 07:03:07 crc kubenswrapper[4780]: E1205 07:03:07.695412 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs podName:02c9c751-c299-4ff8-9c2d-200aae3ea2ba nodeName:}" failed. No retries permitted until 2025-12-05 07:03:15.695394028 +0000 UTC m=+1029.764910360 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-zslsd" (UID: "02c9c751-c299-4ff8-9c2d-200aae3ea2ba") : secret "webhook-server-cert" not found Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.491539 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfzml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-4vfh9_openstack-operators(d3c6c892-e943-45a9-bda7-63fbae6bc3c1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.493430 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" podUID="d3c6c892-e943-45a9-bda7-63fbae6bc3c1" Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.501899 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g52lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-kf66r_openstack-operators(12b890b6-660a-4a2f-a2cf-2cbce76cafc6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.503998 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" podUID="12b890b6-660a-4a2f-a2cf-2cbce76cafc6" Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.513501 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n7d5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-2nh4k_openstack-operators(7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 07:03:13 crc kubenswrapper[4780]: E1205 07:03:13.514951 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" podUID="7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9" Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.017394 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" event={"ID":"1af7f32d-6c1b-4ed2-8511-4ca770bba111","Type":"ContainerStarted","Data":"0d5c5e41db04fb83123a27b427d2839c39a04f6747f0849c9d4c6e9312ed4cdf"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.029447 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" event={"ID":"c032b9cc-5da5-4011-a397-e564fedcf04d","Type":"ContainerStarted","Data":"1397928106cfa7179812b45722b9931e638ff802648b46a367fd886706e62bb8"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.040463 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" event={"ID":"8450df13-5e1a-4f4e-86dc-b1c841845554","Type":"ContainerStarted","Data":"274da6e5e57865517043d232b949501dcd87d155f077ea1e904cc367883ccfaa"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.050852 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" event={"ID":"d3c6c892-e943-45a9-bda7-63fbae6bc3c1","Type":"ContainerStarted","Data":"c8ff2353c29487cfe1e7ae1e8089ff96f1196f71e0f1e811ec63a2ea1819778c"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.051605 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:03:14 crc kubenswrapper[4780]: E1205 07:03:14.053059 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" podUID="d3c6c892-e943-45a9-bda7-63fbae6bc3c1" Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.055054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" event={"ID":"fb14f9a1-3b2e-4c17-a750-b1188fff5b40","Type":"ContainerStarted","Data":"8314d204ba557c96e0983cc6414c0df0377c67e3dfe28af08f85e918245dc691"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.059179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" event={"ID":"12b890b6-660a-4a2f-a2cf-2cbce76cafc6","Type":"ContainerStarted","Data":"a47baa27c9d4fd00bdc090e53ee9feed8976afe11b9eaa0f6fa9a94e31f079b5"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.059386 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:03:14 crc kubenswrapper[4780]: E1205 07:03:14.061868 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" podUID="12b890b6-660a-4a2f-a2cf-2cbce76cafc6" Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.072784 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" 
event={"ID":"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9","Type":"ContainerStarted","Data":"432c3fe39a5f902d3bcc84a61908e887abbfdc99baa7d3d8637c9b93459152f4"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.072835 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:03:14 crc kubenswrapper[4780]: E1205 07:03:14.089131 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" podUID="7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9" Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.099137 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" event={"ID":"1f3fb0ee-0381-48df-91b7-1a72bf5acd62","Type":"ContainerStarted","Data":"6c3866c9625bdc6f74d8fb2995064a1fb782abe5840cc9a8a8356c580a436ab8"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.117340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" event={"ID":"d413e91e-0735-412e-8614-bd86a466267b","Type":"ContainerStarted","Data":"0aa8f885ca8b23bef6e9a71ccd27c6b45584b3e09dccf34ea0dfd23a90337306"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.126484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" event={"ID":"e4d45329-536a-48cf-932c-22669f486a7c","Type":"ContainerStarted","Data":"06821971080a3721d478f4cc67656ec0c63ef238adfc673a8810310de739de35"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.132043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" event={"ID":"972a9e29-9c48-4d8e-9390-e91c9b422af8","Type":"ContainerStarted","Data":"066e71b67a18c301d098dfec69d6277ec8613524ba04c4ed761e6c153911bf9d"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.156239 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" event={"ID":"2f86d305-a39d-42ec-9a73-067610752615","Type":"ContainerStarted","Data":"974f885710424c3a0fbb0dd3bf13772f7324e0f4988a45f961d8e1cfbadeb0e3"} Dec 05 07:03:14 crc kubenswrapper[4780]: I1205 07:03:14.159519 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" event={"ID":"4f817d18-c711-48ff-891e-f5f59fe1ec5f","Type":"ContainerStarted","Data":"f91cf1b6483e835e53dbf73b6d0a754a9835e2d84b92320d4c188bf97c735518"} Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.106403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:15 crc kubenswrapper[4780]: E1205 07:03:15.106558 4780 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:15 crc kubenswrapper[4780]: E1205 
07:03:15.106606 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert podName:edd42acf-bc82-40b8-bae3-c8ce3f8dcd54 nodeName:}" failed. No retries permitted until 2025-12-05 07:03:31.106590141 +0000 UTC m=+1045.176106473 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert") pod "infra-operator-controller-manager-57548d458d-9ntxt" (UID: "edd42acf-bc82-40b8-bae3-c8ce3f8dcd54") : secret "infra-operator-webhook-server-cert" not found Dec 05 07:03:15 crc kubenswrapper[4780]: E1205 07:03:15.183382 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" podUID="7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9" Dec 05 07:03:15 crc kubenswrapper[4780]: E1205 07:03:15.183751 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" podUID="d3c6c892-e943-45a9-bda7-63fbae6bc3c1" Dec 05 07:03:15 crc kubenswrapper[4780]: E1205 07:03:15.183825 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" podUID="12b890b6-660a-4a2f-a2cf-2cbce76cafc6" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.410058 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.418620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d2e66f9-622c-4723-abd6-d1d9689ac660-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5mb6tl\" (UID: \"7d2e66f9-622c-4723-abd6-d1d9689ac660\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.420789 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tqrw6" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.429605 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.717326 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.717424 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.724620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.736836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02c9c751-c299-4ff8-9c2d-200aae3ea2ba-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-zslsd\" (UID: \"02c9c751-c299-4ff8-9c2d-200aae3ea2ba\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.941372 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-djxvt" Dec 05 07:03:15 crc kubenswrapper[4780]: I1205 07:03:15.950391 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:17 crc kubenswrapper[4780]: I1205 07:03:17.299488 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd"] Dec 05 07:03:17 crc kubenswrapper[4780]: I1205 07:03:17.560443 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl"] Dec 05 07:03:17 crc kubenswrapper[4780]: W1205 07:03:17.879816 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d2e66f9_622c_4723_abd6_d1d9689ac660.slice/crio-45c21a94bceb02eb178b2be1bfbb3aeaf479a23c7f4f0e9d9192a8d203c8127a WatchSource:0}: Error finding container 45c21a94bceb02eb178b2be1bfbb3aeaf479a23c7f4f0e9d9192a8d203c8127a: Status 404 returned error can't find the container with id 45c21a94bceb02eb178b2be1bfbb3aeaf479a23c7f4f0e9d9192a8d203c8127a Dec 05 07:03:17 crc kubenswrapper[4780]: W1205 07:03:17.882233 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c9c751_c299_4ff8_9c2d_200aae3ea2ba.slice/crio-4efba295446c6d13b794f9891d9144de8019342a27b1ad7e40db1c7b59ff97ba WatchSource:0}: Error finding container 4efba295446c6d13b794f9891d9144de8019342a27b1ad7e40db1c7b59ff97ba: Status 404 returned error can't find the container with id 4efba295446c6d13b794f9891d9144de8019342a27b1ad7e40db1c7b59ff97ba Dec 05 07:03:18 crc kubenswrapper[4780]: I1205 07:03:18.203516 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" event={"ID":"7d2e66f9-622c-4723-abd6-d1d9689ac660","Type":"ContainerStarted","Data":"45c21a94bceb02eb178b2be1bfbb3aeaf479a23c7f4f0e9d9192a8d203c8127a"} Dec 05 07:03:18 crc kubenswrapper[4780]: I1205 07:03:18.204969 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" event={"ID":"02c9c751-c299-4ff8-9c2d-200aae3ea2ba","Type":"ContainerStarted","Data":"4c061196224b36198b1d18881a96c87dbf02be138e4bb84ab3e6bf9a49245b83"} Dec 05 07:03:18 crc kubenswrapper[4780]: I1205 07:03:18.204991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" event={"ID":"02c9c751-c299-4ff8-9c2d-200aae3ea2ba","Type":"ContainerStarted","Data":"4efba295446c6d13b794f9891d9144de8019342a27b1ad7e40db1c7b59ff97ba"} Dec 05 07:03:18 crc kubenswrapper[4780]: I1205 07:03:18.205159 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:18 crc kubenswrapper[4780]: I1205 07:03:18.228692 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" podStartSLOduration=19.22867556 podStartE2EDuration="19.22867556s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:03:18.22640196 +0000 UTC m=+1032.295918292" watchObservedRunningTime="2025-12-05 07:03:18.22867556 +0000 UTC m=+1032.298191892" Dec 05 07:03:19 crc kubenswrapper[4780]: I1205 07:03:19.341210 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" Dec 05 07:03:19 crc kubenswrapper[4780]: E1205 07:03:19.343507 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" podUID="12b890b6-660a-4a2f-a2cf-2cbce76cafc6" Dec 05 07:03:19 crc kubenswrapper[4780]: I1205 07:03:19.397723 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" Dec 05 07:03:19 crc kubenswrapper[4780]: E1205 07:03:19.400267 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" podUID="d3c6c892-e943-45a9-bda7-63fbae6bc3c1" Dec 05 07:03:19 crc kubenswrapper[4780]: I1205 07:03:19.532769 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" Dec 05 07:03:19 crc kubenswrapper[4780]: E1205 07:03:19.535208 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" podUID="7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9" Dec 05 07:03:25 crc kubenswrapper[4780]: I1205 07:03:25.957505 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-zslsd" Dec 05 07:03:29 crc kubenswrapper[4780]: I1205 07:03:29.908059 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:03:29 crc kubenswrapper[4780]: I1205 07:03:29.908622 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:03:29 crc kubenswrapper[4780]: I1205 07:03:29.908665 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:03:29 crc kubenswrapper[4780]: I1205 07:03:29.909284 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:03:29 crc kubenswrapper[4780]: I1205 07:03:29.909336 4780 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41" gracePeriod=600 Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.156551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.162416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edd42acf-bc82-40b8-bae3-c8ce3f8dcd54-cert\") pod \"infra-operator-controller-manager-57548d458d-9ntxt\" (UID: \"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.272427 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-h65jc" Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.280753 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.317056 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41" exitCode=0 Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.317108 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41"} Dec 05 07:03:31 crc kubenswrapper[4780]: I1205 07:03:31.317147 4780 scope.go:117] "RemoveContainer" containerID="2807931e09acf8b42ad9790918ef7a86372682995b57dd8fe1ed2240e7e7343f" Dec 05 07:03:44 crc kubenswrapper[4780]: E1205 07:03:44.702411 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" Dec 05 07:03:44 crc kubenswrapper[4780]: E1205 07:03:44.703584 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:51004bad441b97668eff122dd7b0cc5bdedfa185ba4d7533d9ff84d5ee9d51e2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:2444fe898df68969a7978bb84fd12c3c61dc371f264156ff0a877d8aab1f9f4e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:2d87021f2f291525dda4c17e8fcd2fbef60780450d7941be423bcfd4047cabd2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:3473a5f5c914f9ba397ffc5ea9d8eeedd85d31a3c9244df7457f3c3e74eaefc4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:c1c8f583529e123a7105ebc2249ab19267313f30138867840d1e65b9390f1886,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:8dcd62d8f75c4dbf0afc27fa96cd481c56d8fb174fa29abafa0d29616eded790,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:82006b9c64d4c5f80483cda262d960ce6be4813665158ef1a53ea7734bbe431f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:a76d2c46403c03704dcfe7de49454496300d60d849ee81076d8637b272043c69,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:d2fbe075d21195b746fd27a073dbd249d38b3c4f81c30d162770a338fb87e338,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:f17b61f2318b74648e174d73dd31deee6c0d1434605c9f32707aedf2f4378957,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s
-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:0b08861590e3646584af0fc7c7d8a743a35b4f5964d6fd355f206daa9ae999ce,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:e26fb8ad7808ca8efe268881f9229df90a755b24bd4ad5501ba3b8c5c16987a3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:cfeb4e264c00408dee5196b06003722b6dda540a3f26d3ff90abfd795729833b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:e02c97e990781e27d0bc5319781ee19618cdb2997adea3df57376cbda9896b55,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:1fe03701929d2f30e832a3831c87d8046806e2c35545aebe94f4a2849b1f8e67,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:2f9748f10c87efbee801c70f46b3dc5c6532ca070af558a4fb45cb34dbbb6f04,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:1d69ad383cb03ef808c1f737427c5ca2385e28a3af1861a4336b6e539b346c27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:112fed4b9de0ccf15011e8a3a26ce6efbbe8e7d8eb3d4153d1a1874b9bde6d68,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:aa87158aeb1194f4940126197b912ea972fafe12ea5c1f89a07d6ccfafc16f77,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:fcd3bf8112793023be72845ce3a984beabd5a3cb369c11252130076ed38b3770,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:fee9fc72864ee217aace1cf11cb090ef41935841f9c60127d775dc2989330777,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:239967aef48587f275c9636d8f89e476d909dbba57fea64d8196ddacf6817450,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:7a0ade11985653bb8ad2646b0848eb6f7128d21d85b99551ac17f74293087a30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:8ab175d7ee42e22e0ca1ebf98d180112428758a86ef8adccaba8f3653567f6ab,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL
_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:d7e43361d50d1e7d4c99e499eee56aa50591855836638742666303dc59096258,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:f3227beee5b52411de42c6a37ceda7d8f68934b4671a2d661403f8c1c0eab6d6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:e6dfe5f67adec298afbb57aec95c9cf89b4757ccfea8d8be66ef0ffd8c58322f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:ba46c29c79c92487b6b9f0db11a517269c6455b8b9786e9d2692f4e24e43d552,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:4f1c6fcf33354f1cbbc914c1709310be2fa4fe0dd64e5dbf3f91d6f0634bd28f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:ecf469bd360c2aa2e5eb57826585c19a10ebe9f683790803dc4989a46c11789e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:d506b2ca02a16cdab757b38a86d40e0459094c7269067de89beb3edf4a50bf5e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:2f2aabcd1b45f9fb3034d28e9a49acac72d7917fd1bbfbbc498e69e8be0b7b2b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:4f4b273dd4c6ead9bf640fb985d101a7c9adba388968fb1d71fbb08b0510eb9f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:21edb042683b37827463124ceb159fa316e8cf0ac6040dc464f5242300b9daad,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:21334e97e6b4194d803a60d0ecfa33327bf248e7507683ea9dcb33a28a2ec858,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE
_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:4deb460a113324762b3139301c6aacd48c57204d8d13eb1c387d7064ec19db0d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:942f9cbe36d328caa5d68b398703b2be5d7b7dc2b034a72d2ae62416cb7be208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:9e2ae3ac44ed2495b0f4398d7419b1e8e1321bec32a0ab043aabf28aa8b33384,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:7cb9e377fa81bbe84fcc006b27c45d56ea3d6ed2144fb9ebf5fb8df5b920d423,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:9d930c44b5d90b140117dd05d976d10d29d93eed9a70118e594e00da64594562,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:a7b6fa2f16a882674624b48939737e2bd95da7bef60db593a8e6e4d397fa516c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:68714e821f8e4e2d905d6e5bc7fb2e713a24c02db48901fb2a11d57b80f6c584,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:4a8b11fbc23e097869f8f347e78a409b294573732987dd8fa6493888a3ff68d2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:57007fab45f2d8fbf929d26609a2e566fbcb006e05d78ca72b9d0b71af866305,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:b0f8d8a4d29d8d4667205df4a94bacefcdd7a33981407c20bd7dd320f27308b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:255cc3471ee112b17da164148b0ec25678332061b5b488868b81a30e5afb5bb5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:a96d336d231eee461559cfe82b025874ce2b8652520297bc5143559694ebac58,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:eaf80338dc065eb9c8c1f40552793c7cc2ff052c88c789f0a5d3e34099549adb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:q
uay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:98a3cff4a3aae37148c4c982a0e37f21a476528cbd74734f59ae22f61fdb6fc1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:a7089bcd0a2dbc014b29391dbd14b3fbc3ba0abd0f36bd16cb3b594cfa001464,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:36cc3ee813bccbfb639f17896bd98028521e3cc5740a5d07f91e119729a76a69,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:61807c42b6197326d9483d65972029117cea6d373ae913fd359993d8e12fff13,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:61dbee4a2559eda45dadf8d2b121cd85f79043d7cb2c1a62f176261042c3e39c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:c4652e3a9c4275470c3ef1a2e4d20a420d9c7bdd5157b0bbdaafea3fa038dcab,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:c1b8da8298ec8be0ca22c7d8ba48da103e72dfe7ed5e9427b971d31eac3a8b33,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:dec5870172c510ae43ff98398260fe595288af59302709d71fc2a020763deb88,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:1e53a53dfe9b3cb757e4d666e76c8989941eb4f0b98d629a7f697a1693aacb17,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:726a3df0e94cfdcef301fe88fa8d91972914ec2104fb6fa1d8e4c325981712a6,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:c8e13f116261ef06b59e9034c605f68d53eb6f760426c35ee6ed3785b97b1800,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:e554a5816081a60a0ae6fd1464c1f0a11cf2133707a4b220a023ecae7b302eed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d,ValueFrom:nil,},EnvVar{Name:RELATED
_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:4eb3a9c95f57df34ab88b952d8ad2057d60ac0aa4526a51070bea5d64e3aeeee,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:ae1279cd0af8af3863925d149db4c514dfda0c159a8084216b7228a35f238678,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:fcb1f8a778d8cffa0f42efdcbde01061cb3aaaccc3453e65a4b213d553ad344c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:3c89899d53b3bca91830c259434c074f27554824a9cdcf117158c4a4329810f5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a1a7ba434daff518f09d8f4075e76308402e9b7a0b5b641ac2ef721fbf88752a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:1ecb6e1be330877bf6dce091efe512045926c0dcb73b67615374ddf5c90adaee,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:78d97df08b9931d90a2523fc4c1d670bdcd5480a6edf96a0d867565f3a6ab78f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:6811871583a498f416300c9a5a2116907f428dbb3530c736c2243d8b6dec2bda,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqfqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod openstack-baremetal-operator-controller-manager-55c85496f5mb6tl_openstack-operators(7d2e66f9-622c-4723-abd6-d1d9689ac660): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:44 crc kubenswrapper[4780]: I1205 07:03:44.706526 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:03:44 crc kubenswrapper[4780]: E1205 07:03:44.732820 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 05 07:03:44 crc kubenswrapper[4780]: E1205 07:03:44.733285 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tckkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-7plfw_openstack-operators(063c7b0a-5211-4356-9452-e55deeeeb834): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.087361 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.087541 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vwg46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xmn7v_openstack-operators(908cf347-4346-4eb1-996f-b214491207e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.088746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podUID="908cf347-4346-4eb1-996f-b214491207e0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.473580 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.473827 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drthd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-5pn44_openstack-operators(b819f602-ddd5-4a16-b998-6d7d78798681): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.857938 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.858383 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nj8hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-vjqff_openstack-operators(e4d45329-536a-48cf-932c-22669f486a7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.860865 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" podUID="e4d45329-536a-48cf-932c-22669f486a7c" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.866184 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.866328 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpthf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-7f7qk_openstack-operators(8450df13-5e1a-4f4e-86dc-b1c841845554): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc 
kubenswrapper[4780]: E1205 07:03:45.867823 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" podUID="8450df13-5e1a-4f4e-86dc-b1c841845554" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.874867 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.875033 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zh59w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-gsggt_openstack-operators(1af7f32d-6c1b-4ed2-8511-4ca770bba111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.876237 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" podUID="1af7f32d-6c1b-4ed2-8511-4ca770bba111" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.887133 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.887286 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttmtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-n7pzh_openstack-operators(4f817d18-c711-48ff-891e-f5f59fe1ec5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.888441 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" podUID="4f817d18-c711-48ff-891e-f5f59fe1ec5f" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.889354 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.889468 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g52q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-jfzgd_openstack-operators(c032b9cc-5da5-4011-a397-e564fedcf04d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.890732 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" podUID="c032b9cc-5da5-4011-a397-e564fedcf04d" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.904203 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.904382 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66xpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-7fj5f_openstack-operators(d413e91e-0735-412e-8614-bd86a466267b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: 
E1205 07:03:45.905919 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" podUID="d413e91e-0735-412e-8614-bd86a466267b" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.907684 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.907826 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8sg44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-pw8g2_openstack-operators(1f3fb0ee-0381-48df-91b7-1a72bf5acd62): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.909268 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" podUID="1f3fb0ee-0381-48df-91b7-1a72bf5acd62" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.921251 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.921435 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} 
{} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j78z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-6j9rd_openstack-operators(2f86d305-a39d-42ec-9a73-067610752615): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.922843 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" podUID="2f86d305-a39d-42ec-9a73-067610752615" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.958622 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.959092 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6h66n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-mmn9w_openstack-operators(972a9e29-9c48-4d8e-9390-e91c9b422af8): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.960564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" podUID="972a9e29-9c48-4d8e-9390-e91c9b422af8" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.987405 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.987612 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gqr6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-9zlhl_openstack-operators(fb14f9a1-3b2e-4c17-a750-b1188fff5b40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:03:45 crc kubenswrapper[4780]: E1205 07:03:45.989950 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" podUID="fb14f9a1-3b2e-4c17-a750-b1188fff5b40" Dec 05 07:03:46 crc kubenswrapper[4780]: E1205 07:03:46.326647 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.433075 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt"] Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.435358 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" event={"ID":"7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9","Type":"ContainerStarted","Data":"ea725b19056616ca7b59784b9a98ab1f62d251d9dc4a126580790d2500d0e614"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.438957 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" event={"ID":"b2f9a2dc-4b04-4209-a427-1467873d3d19","Type":"ContainerStarted","Data":"783f53afffbc6f57dbb01931f9fecb9d46fe6eacd5226d4abe8030af99802f7d"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.440480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" event={"ID":"891b32d7-a00e-4aee-b1a9-11a17e231cf1","Type":"ContainerStarted","Data":"b2420dacca9405502fd4eef6aade4ada0a7344f36172aaf7b392071a1db8bcf0"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.441384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" event={"ID":"9bf061f5-1016-4813-aad6-b50350f6a1c5","Type":"ContainerStarted","Data":"530bc2f4fc7a2855bf3d02fd3136cef7df40d5260437229014c9187edfb40de5"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.443320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.444359 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" event={"ID":"0504ce62-63ec-4224-883e-495a8de219a6","Type":"ContainerStarted","Data":"4c3edb11def7d5f12e74cd82452bd84f679c15c8181c013de2d5bdd4b11b3f3c"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.454205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" event={"ID":"b819f602-ddd5-4a16-b998-6d7d78798681","Type":"ContainerStarted","Data":"8ed3db474c3eca5c6cf5d54a1c1b9215718503b8d00d7094e33620270775348e"} Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.454273 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.455837 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.455903 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.457264 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.457626 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.458331 4780 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2nh4k" podStartSLOduration=2.12970058 podStartE2EDuration="47.458298012s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.767923129 +0000 UTC m=+1014.837439461" lastFinishedPulling="2025-12-05 07:03:46.096520561 +0000 UTC m=+1060.166036893" observedRunningTime="2025-12-05 07:03:46.452187438 +0000 UTC m=+1060.521703770" watchObservedRunningTime="2025-12-05 07:03:46.458298012 +0000 UTC m=+1060.527814344" Dec 05 07:03:46 crc kubenswrapper[4780]: E1205 07:03:46.461573 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:03:46 crc kubenswrapper[4780]: I1205 07:03:46.461623 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" Dec 05 07:03:46 crc kubenswrapper[4780]: E1205 07:03:46.544681 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:46 crc kubenswrapper[4780]: E1205 07:03:46.554013 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" podUID="7d2e66f9-622c-4723-abd6-d1d9689ac660" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.465285 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" event={"ID":"12b890b6-660a-4a2f-a2cf-2cbce76cafc6","Type":"ContainerStarted","Data":"0bd8910a739d7150a76c2b88a06bcec071fb2c929714db53c1ceb1d39831ce85"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.471334 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" event={"ID":"7d2e66f9-622c-4723-abd6-d1d9689ac660","Type":"ContainerStarted","Data":"9b87329322d8a01219af4f21cc25db8e604cec807896baf75f622c3410134883"} Dec 05 07:03:47 crc kubenswrapper[4780]: E1205 07:03:47.473388 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" podUID="7d2e66f9-622c-4723-abd6-d1d9689ac660" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.477383 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" event={"ID":"9bf061f5-1016-4813-aad6-b50350f6a1c5","Type":"ContainerStarted","Data":"1f2e0c43dbb7e7da64f819285a7d46e954fd6d352a176b2f9177ba8fbe1f5254"} 
Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.477960 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.482632 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" event={"ID":"d3c6c892-e943-45a9-bda7-63fbae6bc3c1","Type":"ContainerStarted","Data":"1c27cfbf600d39eeeedd21a64c3fc7a5c1c75ae0bcce3f6998bbd96ba2d283d6"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.489791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" event={"ID":"063c7b0a-5211-4356-9452-e55deeeeb834","Type":"ContainerStarted","Data":"15c5854d6c0cafd3570c2b55e975a878f593fb431f9e6390fbb2462d37fd2db2"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.493351 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kf66r" podStartSLOduration=3.0639194180000002 podStartE2EDuration="48.493334184s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.68030788 +0000 UTC m=+1014.749824212" lastFinishedPulling="2025-12-05 07:03:46.109722646 +0000 UTC m=+1060.179238978" observedRunningTime="2025-12-05 07:03:47.490968641 +0000 UTC m=+1061.560484973" watchObservedRunningTime="2025-12-05 07:03:47.493334184 +0000 UTC m=+1061.562850516" Dec 05 07:03:47 crc kubenswrapper[4780]: E1205 07:03:47.494171 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.512850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" event={"ID":"891b32d7-a00e-4aee-b1a9-11a17e231cf1","Type":"ContainerStarted","Data":"0e319e27470e220275a2d13c52bbb5ff8c849ec9c922b36010e9fc23c5403270"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.513869 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.580395 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" event={"ID":"0504ce62-63ec-4224-883e-495a8de219a6","Type":"ContainerStarted","Data":"336271da32a0b48e62e1781d3b5aec6b44b8573b991544e70bff1f168b636da6"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.581252 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.637791 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" podStartSLOduration=21.873806216 podStartE2EDuration="48.637772501s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.198396576 +0000 UTC 
m=+1015.267912908" lastFinishedPulling="2025-12-05 07:03:27.962362861 +0000 UTC m=+1042.031879193" observedRunningTime="2025-12-05 07:03:47.54013725 +0000 UTC m=+1061.609653582" watchObservedRunningTime="2025-12-05 07:03:47.637772501 +0000 UTC m=+1061.707288833" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.648184 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" event={"ID":"b2f9a2dc-4b04-4209-a427-1467873d3d19","Type":"ContainerStarted","Data":"2c23adbf44d620472c3b5b5dc27ac65463c9f8df42ea87e39780f2f501b07198"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.648967 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.681254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" event={"ID":"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54","Type":"ContainerStarted","Data":"292d6ca93c88fc07b43881c3fc75fc5acc9195ebe4bb400b04f71e1bb9adab11"} Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.725354 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-4vfh9" podStartSLOduration=3.2972067369999998 podStartE2EDuration="48.725333922s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.669225929 +0000 UTC m=+1014.738742251" lastFinishedPulling="2025-12-05 07:03:46.097353104 +0000 UTC m=+1060.166869436" observedRunningTime="2025-12-05 07:03:47.723248665 +0000 UTC m=+1061.792764997" watchObservedRunningTime="2025-12-05 07:03:47.725333922 +0000 UTC m=+1061.794850254" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.802095 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" podStartSLOduration=29.024348719 podStartE2EDuration="48.802065062s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.192400408 +0000 UTC m=+1015.261916740" lastFinishedPulling="2025-12-05 07:03:20.970116751 +0000 UTC m=+1035.039633083" observedRunningTime="2025-12-05 07:03:47.793242964 +0000 UTC m=+1061.862759296" watchObservedRunningTime="2025-12-05 07:03:47.802065062 +0000 UTC m=+1061.871581384" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.844329 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" podStartSLOduration=37.495394537 podStartE2EDuration="49.844307805s" podCreationTimestamp="2025-12-05 07:02:58 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.639950881 +0000 UTC m=+1014.709467213" lastFinishedPulling="2025-12-05 07:03:12.988864149 +0000 UTC m=+1027.058380481" observedRunningTime="2025-12-05 07:03:47.841530581 +0000 UTC m=+1061.911046913" watchObservedRunningTime="2025-12-05 07:03:47.844307805 +0000 UTC m=+1061.913824137" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.889368 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" podStartSLOduration=4.6063287 podStartE2EDuration="48.889347924s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.180192848 +0000 UTC 
m=+1015.249709180" lastFinishedPulling="2025-12-05 07:03:45.463212072 +0000 UTC m=+1059.532728404" observedRunningTime="2025-12-05 07:03:47.886407585 +0000 UTC m=+1061.955923917" watchObservedRunningTime="2025-12-05 07:03:47.889347924 +0000 UTC m=+1061.958864256" Dec 05 07:03:47 crc kubenswrapper[4780]: I1205 07:03:47.953846 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" podStartSLOduration=22.187756585 podStartE2EDuration="48.953818425s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.196196358 +0000 UTC m=+1015.265712690" lastFinishedPulling="2025-12-05 07:03:27.962258198 +0000 UTC m=+1042.031774530" observedRunningTime="2025-12-05 07:03:47.940858927 +0000 UTC m=+1062.010375259" watchObservedRunningTime="2025-12-05 07:03:47.953818425 +0000 UTC m=+1062.023334757" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.702344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" event={"ID":"c032b9cc-5da5-4011-a397-e564fedcf04d","Type":"ContainerStarted","Data":"9463eeabb6570a3ceb3a00918b19129c0c9d0ff190ec8c9f76d1498a802ed3d1"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.703967 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.706652 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.712082 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7fj5f" event={"ID":"d413e91e-0735-412e-8614-bd86a466267b","Type":"ContainerStarted","Data":"276f46147f84a1d2db85441d29f711641ff59d0793f3a141eb618d20612f72ae"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.714122 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" event={"ID":"4f817d18-c711-48ff-891e-f5f59fe1ec5f","Type":"ContainerStarted","Data":"13f04fcde94c211378353ef9ed37670027ae14a53fb022df5e20dc9e808060db"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.714424 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.716390 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.718353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" event={"ID":"fb14f9a1-3b2e-4c17-a750-b1188fff5b40","Type":"ContainerStarted","Data":"032c83943eea6a921e0a881e708be3f4ded2eb78e7c696655614d7be286ec153"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.718865 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.722485 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" event={"ID":"1af7f32d-6c1b-4ed2-8511-4ca770bba111","Type":"ContainerStarted","Data":"5169015900d254a97529edc9be142cfe946dda117862b40212726813da42b4f6"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.725670 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.726743 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" event={"ID":"e4d45329-536a-48cf-932c-22669f486a7c","Type":"ContainerStarted","Data":"88f139c8386b9093aa30fdc5d24a83a9c213055d074c69783a94a36ccbccfb85"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.727039 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.728738 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.728855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" event={"ID":"2f86d305-a39d-42ec-9a73-067610752615","Type":"ContainerStarted","Data":"762f86346847384303dbfa32809777cb1691545d715df02efb8b05c0461a38d1"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.729430 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.731294 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" event={"ID":"1f3fb0ee-0381-48df-91b7-1a72bf5acd62","Type":"ContainerStarted","Data":"7daaf198d762558ffd015290440115671953e3640fdaab683148d982214918bc"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.731865 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.732815 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.733909 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.735026 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" event={"ID":"972a9e29-9c48-4d8e-9390-e91c9b422af8","Type":"ContainerStarted","Data":"b7a430f04ab0385d0418755685221e8dd377538290e2c6f76a83d96a0f5e3935"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.737942 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-jfzgd" podStartSLOduration=37.674798345 podStartE2EDuration="49.737925252s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.924224471 +0000 UTC m=+1014.993740803" 
lastFinishedPulling="2025-12-05 07:03:12.987351388 +0000 UTC m=+1027.056867710" observedRunningTime="2025-12-05 07:03:48.719552669 +0000 UTC m=+1062.789069001" watchObservedRunningTime="2025-12-05 07:03:48.737925252 +0000 UTC m=+1062.807441584" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.747124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" event={"ID":"8450df13-5e1a-4f4e-86dc-b1c841845554","Type":"ContainerStarted","Data":"92ec45a6882274ec39a7ba9a618d9dff00b3d5b82b404977bbe8925f789fd187"} Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.758802 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:03:48 crc kubenswrapper[4780]: E1205 07:03:48.762268 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" podUID="7d2e66f9-622c-4723-abd6-d1d9689ac660" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.788034 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.789987 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9zlhl" podStartSLOduration=37.483027552 podStartE2EDuration="49.789968458s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.669108376 +0000 UTC m=+1014.738624698" lastFinishedPulling="2025-12-05 07:03:12.976049271 +0000 UTC m=+1027.045565604" observedRunningTime="2025-12-05 07:03:48.759796399 +0000 UTC m=+1062.829312731" watchObservedRunningTime="2025-12-05 07:03:48.789968458 +0000 UTC m=+1062.859484790" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.803675 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-n7pzh" podStartSLOduration=37.755254094 podStartE2EDuration="49.803655136s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.932820326 +0000 UTC m=+1015.002336668" lastFinishedPulling="2025-12-05 07:03:12.981221358 +0000 UTC m=+1027.050737710" observedRunningTime="2025-12-05 07:03:48.799500095 +0000 UTC m=+1062.869016427" watchObservedRunningTime="2025-12-05 07:03:48.803655136 +0000 UTC m=+1062.873171468" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.873927 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-gsggt" podStartSLOduration=38.030433847 podStartE2EDuration="49.872780711s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.177039685 +0000 UTC m=+1015.246556017" lastFinishedPulling="2025-12-05 07:03:13.019386539 +0000 UTC m=+1027.088902881" observedRunningTime="2025-12-05 07:03:48.859150376 +0000 UTC m=+1062.928666708" watchObservedRunningTime="2025-12-05 07:03:48.872780711 +0000 UTC m=+1062.942297063" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 
07:03:48.902480 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-7f7qk" podStartSLOduration=37.703136676 podStartE2EDuration="49.902458518s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.807429795 +0000 UTC m=+1014.876946127" lastFinishedPulling="2025-12-05 07:03:13.006751637 +0000 UTC m=+1027.076267969" observedRunningTime="2025-12-05 07:03:48.898112481 +0000 UTC m=+1062.967628813" watchObservedRunningTime="2025-12-05 07:03:48.902458518 +0000 UTC m=+1062.971974850" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.917162 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-mmn9w" podStartSLOduration=37.831129043 podStartE2EDuration="49.917143652s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.931257845 +0000 UTC m=+1015.000774177" lastFinishedPulling="2025-12-05 07:03:13.017272444 +0000 UTC m=+1027.086788786" observedRunningTime="2025-12-05 07:03:48.91483919 +0000 UTC m=+1062.984355522" watchObservedRunningTime="2025-12-05 07:03:48.917143652 +0000 UTC m=+1062.986659994" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.937086 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-6j9rd" podStartSLOduration=37.558653106 podStartE2EDuration="49.937063637s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.602610421 +0000 UTC m=+1014.672126753" lastFinishedPulling="2025-12-05 07:03:12.981020952 +0000 UTC m=+1027.050537284" observedRunningTime="2025-12-05 07:03:48.935627868 +0000 UTC m=+1063.005144210" watchObservedRunningTime="2025-12-05 07:03:48.937063637 +0000 UTC m=+1063.006579969" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.958739 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-vjqff" podStartSLOduration=37.857073708 podStartE2EDuration="50.958723889s" podCreationTimestamp="2025-12-05 07:02:58 +0000 UTC" firstStartedPulling="2025-12-05 07:02:59.853823201 +0000 UTC m=+1013.923339533" lastFinishedPulling="2025-12-05 07:03:12.955473382 +0000 UTC m=+1027.024989714" observedRunningTime="2025-12-05 07:03:48.957404683 +0000 UTC m=+1063.026921025" watchObservedRunningTime="2025-12-05 07:03:48.958723889 +0000 UTC m=+1063.028240221" Dec 05 07:03:48 crc kubenswrapper[4780]: I1205 07:03:48.980083 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pw8g2" podStartSLOduration=37.927658055 podStartE2EDuration="49.980060051s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.935068356 +0000 UTC m=+1015.004584688" lastFinishedPulling="2025-12-05 07:03:12.987470352 +0000 UTC m=+1027.056986684" observedRunningTime="2025-12-05 07:03:48.976435773 +0000 UTC m=+1063.045952105" watchObservedRunningTime="2025-12-05 07:03:48.980060051 +0000 UTC m=+1063.049576373" Dec 05 07:03:49 crc kubenswrapper[4780]: I1205 07:03:49.759036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" 
event={"ID":"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54","Type":"ContainerStarted","Data":"8a5ea7b176dcc86683b55f39a28b3735ddd69d753c5f7e9460b54e075e31c76f"} Dec 05 07:03:50 crc kubenswrapper[4780]: I1205 07:03:50.766705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" event={"ID":"edd42acf-bc82-40b8-bae3-c8ce3f8dcd54","Type":"ContainerStarted","Data":"dcbddd560baab86115725bd48aae71187c75781ccca938ff18df0d737e467ccb"} Dec 05 07:03:50 crc kubenswrapper[4780]: I1205 07:03:50.788168 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" podStartSLOduration=48.829569679 podStartE2EDuration="51.788137443s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:46.551063072 +0000 UTC m=+1060.620579404" lastFinishedPulling="2025-12-05 07:03:49.509630836 +0000 UTC m=+1063.579147168" observedRunningTime="2025-12-05 07:03:50.783417467 +0000 UTC m=+1064.852933789" watchObservedRunningTime="2025-12-05 07:03:50.788137443 +0000 UTC m=+1064.857653765" Dec 05 07:03:51 crc kubenswrapper[4780]: I1205 07:03:51.281349 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:03:58 crc kubenswrapper[4780]: E1205 07:03:58.166626 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podUID="063c7b0a-5211-4356-9452-e55deeeeb834" Dec 05 07:03:59 crc kubenswrapper[4780]: I1205 07:03:59.906397 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xbdxn" Dec 05 07:03:59 crc kubenswrapper[4780]: I1205 07:03:59.946185 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-sl2j8" Dec 05 07:03:59 crc kubenswrapper[4780]: I1205 07:03:59.981198 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-r5j4j" Dec 05 07:04:00 crc kubenswrapper[4780]: I1205 07:04:00.121008 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-dr6pg" Dec 05 07:04:00 crc kubenswrapper[4780]: E1205 07:04:00.154788 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podUID="908cf347-4346-4eb1-996f-b214491207e0" Dec 05 07:04:00 crc kubenswrapper[4780]: E1205 07:04:00.157255 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" 
pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podUID="b819f602-ddd5-4a16-b998-6d7d78798681" Dec 05 07:04:01 crc kubenswrapper[4780]: I1205 07:04:01.286715 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-9ntxt" Dec 05 07:04:03 crc kubenswrapper[4780]: I1205 07:04:03.874427 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" event={"ID":"7d2e66f9-622c-4723-abd6-d1d9689ac660","Type":"ContainerStarted","Data":"1bf53473e1f01e83100559d089fe6880e31930893c8a7268b95639d4751d196d"} Dec 05 07:04:03 crc kubenswrapper[4780]: I1205 07:04:03.875642 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:04:03 crc kubenswrapper[4780]: I1205 07:04:03.904467 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" podStartSLOduration=19.367592235 podStartE2EDuration="1m4.904445051s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:17.885765272 +0000 UTC m=+1031.955281624" lastFinishedPulling="2025-12-05 07:04:03.422618108 +0000 UTC m=+1077.492134440" observedRunningTime="2025-12-05 07:04:03.901791869 +0000 UTC m=+1077.971308211" watchObservedRunningTime="2025-12-05 07:04:03.904445051 +0000 UTC m=+1077.973961383" Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.938544 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" event={"ID":"b819f602-ddd5-4a16-b998-6d7d78798681","Type":"ContainerStarted","Data":"bd97bc1ddbe5fbdcbeeefdf1b02dfe69a5206f017e76a7f6019f52d4fb0b3432"} Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.939386 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.940313 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" event={"ID":"063c7b0a-5211-4356-9452-e55deeeeb834","Type":"ContainerStarted","Data":"358c2d245370f662f0fce2286e5866011b99f32707a0352a0323c6a2d558ecff"} Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.940506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.955564 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" podStartSLOduration=3.126412698 podStartE2EDuration="1m13.95554711s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.943535448 +0000 UTC m=+1015.013051780" lastFinishedPulling="2025-12-05 07:04:11.77266986 +0000 UTC m=+1085.842186192" observedRunningTime="2025-12-05 07:04:12.954077021 +0000 UTC m=+1087.023593363" watchObservedRunningTime="2025-12-05 07:04:12.95554711 +0000 UTC m=+1087.025063442" Dec 05 07:04:12 crc kubenswrapper[4780]: I1205 07:04:12.993937 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" podStartSLOduration=2.40427825 podStartE2EDuration="1m13.99391203s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:00.967736373 +0000 UTC m=+1015.037252705" lastFinishedPulling="2025-12-05 07:04:12.557370163 +0000 UTC m=+1086.626886485" observedRunningTime="2025-12-05 07:04:12.971278463 +0000 UTC m=+1087.040794865" watchObservedRunningTime="2025-12-05 07:04:12.99391203 +0000 UTC m=+1087.063428362" Dec 05 07:04:15 crc kubenswrapper[4780]: I1205 07:04:15.438785 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5mb6tl" Dec 05 07:04:16 crc kubenswrapper[4780]: I1205 07:04:16.970187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" event={"ID":"908cf347-4346-4eb1-996f-b214491207e0","Type":"ContainerStarted","Data":"c63226c514178c8ee03fdffa3e62f62bd95b47719d20e7232b8c2e508e5e011d"} Dec 05 07:04:17 crc kubenswrapper[4780]: I1205 07:04:17.992839 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xmn7v" podStartSLOduration=4.616927896 podStartE2EDuration="1m18.992821051s" podCreationTimestamp="2025-12-05 07:02:59 +0000 UTC" firstStartedPulling="2025-12-05 07:03:01.177057115 +0000 UTC m=+1015.246573447" lastFinishedPulling="2025-12-05 07:04:15.55295027 +0000 UTC m=+1089.622466602" observedRunningTime="2025-12-05 07:04:17.987173289 +0000 UTC m=+1092.056689651" watchObservedRunningTime="2025-12-05 07:04:17.992821051 +0000 UTC m=+1092.062337383" Dec 05 07:04:19 crc kubenswrapper[4780]: I1205 07:04:19.501640 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-5pn44" Dec 05 07:04:19 crc kubenswrapper[4780]: I1205 07:04:19.767746 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-7plfw" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.960769 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"] Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.963502 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.965458 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.965791 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z5cgh" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.965935 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.966123 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 05 07:04:37 crc kubenswrapper[4780]: I1205 07:04:37.978848 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.021178 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.021245 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wx9\" (UniqueName: \"kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.021348 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.022514 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.025667 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.026569 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.122754 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.122812 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.122843 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.122869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvvqn\" (UniqueName: \"kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.122896 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wx9\" (UniqueName: \"kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.123811 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.142690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wx9\" (UniqueName: \"kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9\") pod \"dnsmasq-dns-5cd484bb89-d8ftb\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") " pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.223860 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 
07:04:38.223972 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.224004 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvvqn\" (UniqueName: \"kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.224829 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.224899 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.239845 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvvqn\" (UniqueName: \"kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn\") pod \"dnsmasq-dns-567c455747-2jt5q\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") " pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.284458 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.338115 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-2jt5q" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.756604 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.786140 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.807275 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.808438 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.826152 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.862304 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"] Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.936745 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7slp\" (UniqueName: \"kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.936860 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:38 crc kubenswrapper[4780]: I1205 07:04:38.936918 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.038149 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7slp\" (UniqueName: \"kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.038293 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.038316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.039377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.039378 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.056495 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7slp\" (UniqueName: \"kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp\") pod \"dnsmasq-dns-bc4b48fc9-cnh7x\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.114297 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-2jt5q" event={"ID":"07896b43-77e1-4aeb-8cde-f5b8dd17a900","Type":"ContainerStarted","Data":"28d36b2af85baafbb9d1159ccc07be8a66e9d65413a0aa7444ad5210d03e539b"} Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.116222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" event={"ID":"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8","Type":"ContainerStarted","Data":"3fb86f5bf70228b09e01d2f55f43528c2f36943b4dfe7e0a5a9414f6c86c5c3b"} Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.126872 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.600864 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"] Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.614830 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"] Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.651562 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"] Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.654380 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.668764 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"] Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.749428 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgsz\" (UniqueName: \"kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.749820 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.749871 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.851532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgsz\" (UniqueName: \"kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " 
pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.851625 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.851674 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.852676 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.853554 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.888009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgsz\" (UniqueName: \"kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz\") pod \"dnsmasq-dns-cb666b895-zt6bw\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.984539 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.985812 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.987246 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.988762 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.988982 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.989184 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.989287 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9rwnp" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.989956 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.990173 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 07:04:39 crc kubenswrapper[4780]: I1205 07:04:39.990333 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.006743 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054314 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4xq\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054346 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054546 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054581 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054656 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054682 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.054818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.127325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" event={"ID":"b7024867-1485-40d6-8054-06ec596a0585","Type":"ContainerStarted","Data":"aeedc0e139310673a9014b30fd0cf10d4a416cb6a10bf23208870165fe3917c1"} Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155713 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155786 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc 
kubenswrapper[4780]: I1205 07:04:40.155808 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4xq\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155850 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155876 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155919 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.155979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.156003 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.156966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.157285 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.158012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.159469 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.160440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.160750 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.166014 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.171343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.172720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.179277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.179453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4xq\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.193854 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.315860 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.512107 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"] Dec 05 07:04:40 crc kubenswrapper[4780]: W1205 07:04:40.555127 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae7a1567_97ed_4968_90bb_4dab84011023.slice/crio-ac82f69c447e58ce91985279a6778ce04b850a96f3e1b57a428c85a2616d0675 WatchSource:0}: Error finding container ac82f69c447e58ce91985279a6778ce04b850a96f3e1b57a428c85a2616d0675: Status 404 returned error can't find the container with id ac82f69c447e58ce91985279a6778ce04b850a96f3e1b57a428c85a2616d0675 Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.810288 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.814445 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.823393 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.824140 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.827393 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.828756 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.828878 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bh575" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.829012 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.829169 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.831700 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.860293 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865714 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865757 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865839 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865882 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865925 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4k6p\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865947 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.865965 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.866019 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: W1205 07:04:40.873931 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5032d09_8298_4941_8b4b_0f24a57b8ced.slice/crio-c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33 WatchSource:0}: Error finding container c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33: Status 404 returned error can't find the container with id c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33 Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.967504 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.967590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.967624 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.968070 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.968351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.968403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.968456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.968484 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4k6p\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969921 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969954 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969514 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969413 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.969784 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.971101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.975577 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.975644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.988493 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.988515 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:40 crc kubenswrapper[4780]: I1205 07:04:40.993061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4k6p\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:41 crc kubenswrapper[4780]: I1205 07:04:41.009335 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:41 crc kubenswrapper[4780]: I1205 07:04:41.158305 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:04:41 crc kubenswrapper[4780]: I1205 07:04:41.159900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerStarted","Data":"c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33"} Dec 05 07:04:41 crc kubenswrapper[4780]: I1205 07:04:41.165691 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" event={"ID":"ae7a1567-97ed-4968-90bb-4dab84011023","Type":"ContainerStarted","Data":"ac82f69c447e58ce91985279a6778ce04b850a96f3e1b57a428c85a2616d0675"} Dec 05 07:04:41 crc kubenswrapper[4780]: I1205 07:04:41.997502 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.181122 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85","Type":"ContainerStarted","Data":"29e579b44d96c57574bd38699ee2194136d3ab35d0b7c03c2608a355ee26cf23"} Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.558570 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.559841 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.566988 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.567872 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zv8sw" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.568063 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.568946 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.570801 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.583609 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.604640 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605677 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605759 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605778 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45kr\" (UniqueName: \"kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605850 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605922 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.605944 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.707615 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.707780 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.707906 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.707943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 
07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.707987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45kr\" (UniqueName: \"kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.708032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.708092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.708115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.708804 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.709123 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.709456 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.710295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.712529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.715379 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.731547 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45kr\" (UniqueName: \"kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.763399 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.769105 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " pod="openstack/openstack-galera-0" Dec 05 07:04:42 crc kubenswrapper[4780]: I1205 07:04:42.896257 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.658950 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.806473 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.809481 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.813971 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.814411 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ql9sm" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.814735 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.818488 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.825175 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846181 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846309 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846333 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqsv\" (UniqueName: 
\"kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.846362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950414 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqsv\" (UniqueName: \"kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950489 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950537 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950598 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated\") 
pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.950722 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.951152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.951709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.952434 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.954029 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.956375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.983564 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.989716 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.991592 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 05 07:04:43 crc kubenswrapper[4780]: I1205 07:04:43.994799 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:43.999237 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b5fld" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:43.999350 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqsv\" (UniqueName: \"kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:43.999480 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:43.999556 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.037043 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.055056 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.055099 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.055184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.055235 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krgbp\" (UniqueName: \"kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.055259 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.157693 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krgbp\" (UniqueName: 
\"kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.157833 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.157940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.157982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.158254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.163376 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.164710 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.164860 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.178923 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.180742 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.194100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krgbp\" (UniqueName: \"kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp\") pod \"memcached-0\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " pod="openstack/memcached-0" Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.253539 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerStarted","Data":"f1382d25f679be228ed9947a98d27d54904acd40002add8d03cf765e4aa55222"} Dec 05 07:04:44 crc kubenswrapper[4780]: I1205 07:04:44.421647 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 07:04:45 crc kubenswrapper[4780]: I1205 07:04:45.051965 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 07:04:45 crc kubenswrapper[4780]: I1205 07:04:45.085965 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:04:45 crc kubenswrapper[4780]: W1205 07:04:45.092549 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621ea4dd_7bc5_4404_9369_1cd99335155d.slice/crio-6c4fdbdea601ec90119f264aeaaba1beb2f1841bc6041f4f23023ec7a91c260f WatchSource:0}: Error finding container 6c4fdbdea601ec90119f264aeaaba1beb2f1841bc6041f4f23023ec7a91c260f: Status 404 returned error can't find the container with id 6c4fdbdea601ec90119f264aeaaba1beb2f1841bc6041f4f23023ec7a91c260f Dec 05 07:04:45 crc kubenswrapper[4780]: I1205 07:04:45.260466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7c32a219-7b72-4302-8cc4-b9f11a672e8d","Type":"ContainerStarted","Data":"c77d4e84f7126a327e580a287c27add1511a08fcf8cb3e6afb03dac9e751e6ff"} Dec 05 07:04:45 crc kubenswrapper[4780]: I1205 07:04:45.261410 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerStarted","Data":"6c4fdbdea601ec90119f264aeaaba1beb2f1841bc6041f4f23023ec7a91c260f"} Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.029224 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.030349 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.032955 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8vtsr" Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.035472 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.096551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsq58\" (UniqueName: \"kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58\") pod \"kube-state-metrics-0\" (UID: \"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1\") " pod="openstack/kube-state-metrics-0" Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.203993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsq58\" (UniqueName: \"kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58\") pod \"kube-state-metrics-0\" (UID: \"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1\") " pod="openstack/kube-state-metrics-0" Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.245702 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsq58\" (UniqueName: \"kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58\") pod \"kube-state-metrics-0\" (UID: \"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1\") " pod="openstack/kube-state-metrics-0" Dec 05 07:04:46 crc kubenswrapper[4780]: I1205 07:04:46.369243 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 07:04:47 crc kubenswrapper[4780]: I1205 07:04:47.145502 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.445542 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fs2vs"] Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.446575 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.450698 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.450954 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.451072 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nrvxm" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.473853 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fs2vs"] Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.552705 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"] Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.556401 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.558722 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.558931 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.558984 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.559135 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.560284 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.560341 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.560361 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzfgz\" (UniqueName: \"kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.573846 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"] Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662136 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662280 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662341 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6v7\" (UniqueName: \"kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662365 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662407 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662487 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzfgz\" (UniqueName: \"kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz\") pod \"ovn-controller-fs2vs\" (UID: 
\"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662509 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.662532 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.664548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.664977 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.666873 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.667054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.676555 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.682851 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.689016 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzfgz\" (UniqueName: \"kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz\") pod \"ovn-controller-fs2vs\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763845 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763918 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763941 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.763970 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw6v7\" (UniqueName: \"kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.764416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.764506 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.764528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.764824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc 
kubenswrapper[4780]: I1205 07:04:48.771576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.786463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw6v7\" (UniqueName: \"kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7\") pod \"ovn-controller-ovs-lq2sf\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.824263 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fs2vs" Dec 05 07:04:48 crc kubenswrapper[4780]: I1205 07:04:48.879513 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.403049 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.404726 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.407824 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vj5fh" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.410929 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.411909 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.412033 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.412550 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.413280 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.506966 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507012 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " 
pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507299 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507353 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.507603 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55s2d\" (UniqueName: \"kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.608873 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.608929 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.608957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.608978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.609012 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.609030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.609052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.609089 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55s2d\" (UniqueName: \"kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.609685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.611262 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.611361 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.611576 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.614555 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.616505 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.617038 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.625778 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55s2d\" (UniqueName: \"kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.633831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:50 crc kubenswrapper[4780]: I1205 07:04:50.731003 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.442809 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.446085 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.449430 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.449757 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.450137 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.450358 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cmbwm" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.450394 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567460 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567554 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567579 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zgd\" (UniqueName: \"kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567609 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567743 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.567791 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669209 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669354 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669384 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669427 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669450 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zgd\" (UniqueName: 
\"kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.669988 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.670121 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.670357 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.670428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.674627 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.674698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.680617 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.685991 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zgd\" (UniqueName: \"kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.705633 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " pod="openstack/ovsdbserver-sb-0" Dec 05 07:04:53 crc kubenswrapper[4780]: I1205 07:04:53.771337 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 07:05:03 crc kubenswrapper[4780]: I1205 07:05:03.454738 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1","Type":"ContainerStarted","Data":"5162f3835cf1ddd3c34f5f74fda2f5f6aa68265a6b2e735fc6b073a09e822ff6"} Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.018300 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a" Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.018925 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkqsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(621ea4dd-7bc5-4404-9369-1cd99335155d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.020095 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d"
Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.526148 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d"
Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.653081 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108"
Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.653321 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n556h646h66h587h648h5c8h5dbh57bh5f7h597h5d7h566h57ch5c8h8bh666h6dh67bh648h97h694h5dfh5c6h94h96h667h5d9hbdh5c6hc6hfch656q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krgbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7c32a219-7b72-4302-8cc4-b9f11a672e8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:12 crc kubenswrapper[4780]: E1205 07:05:12.654540 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.534750 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:dad2336390cae6705133deefaa09c9e39512cf29133aa009006e3962c8022108\\\"\"" pod="openstack/memcached-0" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.668254 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.668537 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4k6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(1e6efd4f-660c-44e1-bf69-8b1cec6a6e85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.669716 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.676493 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.676746 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t4xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f5032d09-8298-4941-8b4b-0f24a57b8ced): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.678641 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.900734 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.900921 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l45kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(885ecc9e-e70a-4d6e-ab6b-f82e46be61a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:13 crc kubenswrapper[4780]: E1205 07:05:13.902310 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"
Dec 05 07:05:14 crc kubenswrapper[4780]: E1205 07:05:14.541510 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced"
Dec 05 07:05:14 crc kubenswrapper[4780]: E1205 07:05:14.541891 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:c2e607504d3c90c201f29ca74f0ee51de1ef2c5444d597d2ff3160ab9b3ce84d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"
Dec 05 07:05:14 crc kubenswrapper[4780]: E1205 07:05:14.542746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:d9a2d8b19d3de4658dd26a2e781d00002e937738bfe6a1d0cf6c68c015085f4a\\\"\"" pod="openstack/openstack-galera-0" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"
Dec 05 07:05:17 crc kubenswrapper[4780]: I1205 07:05:17.961754 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fs2vs"]
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.358338 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.358807 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7slp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bc4b48fc9-cnh7x_openstack(b7024867-1485-40d6-8054-06ec596a0585): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.361399 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" podUID="b7024867-1485-40d6-8054-06ec596a0585"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.378258 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.378426 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzgsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-cb666b895-zt6bw_openstack(ae7a1567-97ed-4968-90bb-4dab84011023): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.380504 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" podUID="ae7a1567-97ed-4968-90bb-4dab84011023"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.382109 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.382263 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvvqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-567c455747-2jt5q_openstack(07896b43-77e1-4aeb-8cde-f5b8dd17a900): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.383506 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-567c455747-2jt5q" podUID="07896b43-77e1-4aeb-8cde-f5b8dd17a900"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.390687 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.390853 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7wx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5cd484bb89-d8ftb_openstack(35bd5941-7ab9-4cda-9a08-0d253cf6e0a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.392110 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" podUID="35bd5941-7ab9-4cda-9a08-0d253cf6e0a8"
Dec 05 07:05:18 crc kubenswrapper[4780]: I1205 07:05:18.577015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs" event={"ID":"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0","Type":"ContainerStarted","Data":"c56ceaa09feff50252926c6530e388c38cc7259afb3917f4c519acbf12b1c74d"}
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.582434 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792\\\"\"" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" podUID="b7024867-1485-40d6-8054-06ec596a0585"
Dec 05 07:05:18 crc kubenswrapper[4780]: E1205 07:05:18.582510 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792\\\"\"" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" podUID="ae7a1567-97ed-4968-90bb-4dab84011023"
Dec 05 07:05:18 crc kubenswrapper[4780]: I1205 07:05:18.866354 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.030003 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.121585 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.155546 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb"
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.157024 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-2jt5q"
Dec 05 07:05:19 crc kubenswrapper[4780]: W1205 07:05:19.172158 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa3ab37e_e167_44dd_985c_c8f6b067cfdd.slice/crio-94f34fd57629a6d65f4cd39e9ad6afd09fdbde4efc61eda7e658a5827fd74481 WatchSource:0}: Error finding container 94f34fd57629a6d65f4cd39e9ad6afd09fdbde4efc61eda7e658a5827fd74481: Status 404 returned error can't find the container with id 94f34fd57629a6d65f4cd39e9ad6afd09fdbde4efc61eda7e658a5827fd74481
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.264425 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config\") pod \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") "
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.264552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config\") pod \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") "
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.264601 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wx9\" (UniqueName: \"kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9\") pod \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\" (UID: \"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8\") "
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.264635 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc\") pod \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") "
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.264828 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvvqn\" (UniqueName: \"kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn\") pod \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\" (UID: \"07896b43-77e1-4aeb-8cde-f5b8dd17a900\") "
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.266369 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config" (OuterVolumeSpecName: "config") pod "07896b43-77e1-4aeb-8cde-f5b8dd17a900" (UID: "07896b43-77e1-4aeb-8cde-f5b8dd17a900"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.266477 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07896b43-77e1-4aeb-8cde-f5b8dd17a900" (UID: "07896b43-77e1-4aeb-8cde-f5b8dd17a900"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.268381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config" (OuterVolumeSpecName: "config") pod "35bd5941-7ab9-4cda-9a08-0d253cf6e0a8" (UID: "35bd5941-7ab9-4cda-9a08-0d253cf6e0a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.273534 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn" (OuterVolumeSpecName: "kube-api-access-hvvqn") pod "07896b43-77e1-4aeb-8cde-f5b8dd17a900" (UID: "07896b43-77e1-4aeb-8cde-f5b8dd17a900"). InnerVolumeSpecName "kube-api-access-hvvqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.274392 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9" (OuterVolumeSpecName: "kube-api-access-p7wx9") pod "35bd5941-7ab9-4cda-9a08-0d253cf6e0a8" (UID: "35bd5941-7ab9-4cda-9a08-0d253cf6e0a8"). InnerVolumeSpecName "kube-api-access-p7wx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.366341 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-config\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.366377 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-config\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.366387 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wx9\" (UniqueName: \"kubernetes.io/projected/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8-kube-api-access-p7wx9\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.366398 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07896b43-77e1-4aeb-8cde-f5b8dd17a900-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.366408 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvvqn\" (UniqueName: \"kubernetes.io/projected/07896b43-77e1-4aeb-8cde-f5b8dd17a900-kube-api-access-hvvqn\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.585757 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerStarted","Data":"94f34fd57629a6d65f4cd39e9ad6afd09fdbde4efc61eda7e658a5827fd74481"}
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.587372 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerStarted","Data":"19d8f8b6375855a74dc734369454053df27d780be4f7d6aa48ec7e7253bb23d8"}
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.588546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-2jt5q" event={"ID":"07896b43-77e1-4aeb-8cde-f5b8dd17a900","Type":"ContainerDied","Data":"28d36b2af85baafbb9d1159ccc07be8a66e9d65413a0aa7444ad5210d03e539b"}
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.588638 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-2jt5q"
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.591997 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerStarted","Data":"53f5f8eabb090c5ebef04244cba470e7c1cc6d5514edf4ec50de748007a04ec9"}
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.593115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb" event={"ID":"35bd5941-7ab9-4cda-9a08-0d253cf6e0a8","Type":"ContainerDied","Data":"3fb86f5bf70228b09e01d2f55f43528c2f36943b4dfe7e0a5a9414f6c86c5c3b"}
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.593225 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-d8ftb"
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.644321 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.659639 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567c455747-2jt5q"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.672423 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"]
Dec 05 07:05:19 crc kubenswrapper[4780]: I1205 07:05:19.684317 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-d8ftb"]
Dec 05 07:05:20 crc kubenswrapper[4780]: I1205 07:05:20.151782 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07896b43-77e1-4aeb-8cde-f5b8dd17a900" path="/var/lib/kubelet/pods/07896b43-77e1-4aeb-8cde-f5b8dd17a900/volumes"
Dec 05 07:05:20 crc kubenswrapper[4780]: I1205 07:05:20.152530 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35bd5941-7ab9-4cda-9a08-0d253cf6e0a8" path="/var/lib/kubelet/pods/35bd5941-7ab9-4cda-9a08-0d253cf6e0a8/volumes"
Dec 05 07:05:20 crc kubenswrapper[4780]: I1205 07:05:20.602041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1","Type":"ContainerStarted","Data":"5b993d0922d34d413538759bce43f543fde767319d1977a38555ec9962eb3d8c"}
Dec 05 07:05:20 crc kubenswrapper[4780]: I1205 07:05:20.602445 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 05 07:05:20 crc kubenswrapper[4780]: I1205 07:05:20.625132 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.250485689 podStartE2EDuration="34.625108917s" podCreationTimestamp="2025-12-05 07:04:46 +0000 UTC" firstStartedPulling="2025-12-05 07:05:02.945327418 +0000 UTC m=+1137.014843750" lastFinishedPulling="2025-12-05 07:05:20.319950646 +0000 UTC m=+1154.389466978" observedRunningTime="2025-12-05 07:05:20.619335523 +0000 UTC m=+1154.688851875" watchObservedRunningTime="2025-12-05 07:05:20.625108917 +0000 UTC m=+1154.694625249"
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.617695 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerStarted","Data":"ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd"}
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.623206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs" event={"ID":"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0","Type":"ContainerStarted","Data":"fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba"}
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.623268 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fs2vs"
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.627182 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerStarted","Data":"5c9c067c92697e48033b3641b520cfc47f50b10a41d6b3d91152e79157a514bf"}
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.629334 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerStarted","Data":"22afb8fe24ad1a2948524fdf54fc7bd5c9280a7319d822d04933116bfabbb09b"}
Dec 05 07:05:22 crc kubenswrapper[4780]: I1205 07:05:22.643968 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fs2vs" podStartSLOduration=30.7095132 podStartE2EDuration="34.643944927s" podCreationTimestamp="2025-12-05 07:04:48 +0000 UTC" firstStartedPulling="2025-12-05 07:05:18.313280264 +0000 UTC m=+1152.382796606" lastFinishedPulling="2025-12-05 07:05:22.247711991 +0000 UTC m=+1156.317228333" observedRunningTime="2025-12-05 07:05:22.638301786 +0000 UTC m=+1156.707818118" watchObservedRunningTime="2025-12-05 07:05:22.643944927 +0000 UTC m=+1156.713461259"
Dec 05 07:05:23 crc kubenswrapper[4780]: I1205 07:05:23.639953 4780 generic.go:334] "Generic (PLEG): container finished" podID="52793d91-2b27-4926-9293-78f555401415" containerID="22afb8fe24ad1a2948524fdf54fc7bd5c9280a7319d822d04933116bfabbb09b" exitCode=0
Dec 05 07:05:23 crc kubenswrapper[4780]: I1205 07:05:23.640158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerDied","Data":"22afb8fe24ad1a2948524fdf54fc7bd5c9280a7319d822d04933116bfabbb09b"}
Dec 05 07:05:24 crc kubenswrapper[4780]: I1205 07:05:24.649686 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerStarted","Data":"7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd"}
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.663224 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerStarted","Data":"eb3b7412e25e8f35afb87b1a111e1efdac1f6846e9ac48835f6a92743cf44e0b"}
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.667850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerStarted","Data":"d7f5fd7515ed34f074ee09f78ddd69456ef45c158b4ca80becb54c10be0aea32"}
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.690496 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.534454814 podStartE2EDuration="33.690461022s" podCreationTimestamp="2025-12-05 07:04:52 +0000 UTC" firstStartedPulling="2025-12-05 07:05:19.073537391 +0000 UTC m=+1153.143053723" lastFinishedPulling="2025-12-05 07:05:25.229543599 +0000 UTC m=+1159.299059931" observedRunningTime="2025-12-05 07:05:25.686551566 +0000 UTC m=+1159.756067898" watchObservedRunningTime="2025-12-05 07:05:25.690461022 +0000 UTC m=+1159.759977354"
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.695872 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerStarted","Data":"5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458"}
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.696460 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lq2sf"
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.696513 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lq2sf"
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.715553 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.654643129 podStartE2EDuration="36.715520804s" podCreationTimestamp="2025-12-05 07:04:49 +0000 UTC" firstStartedPulling="2025-12-05 07:05:19.182959938 +0000 UTC m=+1153.252476270" lastFinishedPulling="2025-12-05 07:05:25.243837613 +0000 UTC m=+1159.313353945" observedRunningTime="2025-12-05 07:05:25.707677493 +0000 UTC m=+1159.777193825" watchObservedRunningTime="2025-12-05 07:05:25.715520804 +0000 UTC m=+1159.785037136"
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.752965 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 05 07:05:25 crc kubenswrapper[4780]: I1205 07:05:25.761662 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lq2sf" podStartSLOduration=34.604018955 podStartE2EDuration="37.761633751s" podCreationTimestamp="2025-12-05 07:04:48 +0000 UTC" firstStartedPulling="2025-12-05 07:05:19.071862776 +0000 UTC m=+1153.141379108" lastFinishedPulling="2025-12-05 07:05:22.229477572 +0000 UTC m=+1156.298993904" observedRunningTime="2025-12-05 07:05:25.754766668 +0000 UTC m=+1159.824283010" watchObservedRunningTime="2025-12-05 07:05:25.761633751 +0000 UTC m=+1159.831150083"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.393096 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.705443 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerStarted","Data":"316425e4a0ec5cf2c114631bb0b139a58858607d1ad2c0a8665ea65f6d08f90f"}
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.707001 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7c32a219-7b72-4302-8cc4-b9f11a672e8d","Type":"ContainerStarted","Data":"ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77"}
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.708646 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.731478 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.95688064 podStartE2EDuration="43.731456433s" podCreationTimestamp="2025-12-05 07:04:43 +0000 UTC" firstStartedPulling="2025-12-05 07:04:45.092218322 +0000 UTC m=+1119.161734654" lastFinishedPulling="2025-12-05 07:05:25.866794115 +0000 UTC m=+1159.936310447" observedRunningTime="2025-12-05 07:05:26.728790362 +0000 UTC m=+1160.798306694" watchObservedRunningTime="2025-12-05 07:05:26.731456433 +0000 UTC m=+1160.800972765"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.731835 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.772058 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.779304 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 05 07:05:26 crc kubenswrapper[4780]: I1205 07:05:26.827411 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 05 07:05:27 crc kubenswrapper[4780]: I1205 07:05:27.715634 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85","Type":"ContainerStarted","Data":"edf3bc8a8d63ed0f663b4a238ab5c8946207e1f70506209b7613ac8e39b9757a"}
Dec 05 07:05:27 crc kubenswrapper[4780]: I1205 07:05:27.716894 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerStarted","Data":"3f96e5ef3fbb0acd20f1bcd74508b2612b79eecee120455dd0bc859e46d3b5c7"}
Dec 05 07:05:27 crc kubenswrapper[4780]: I1205 07:05:27.717373 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Dec 05 07:05:27 crc kubenswrapper[4780]: I1205 07:05:27.764841 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 05 07:05:27 crc kubenswrapper[4780]: I1205 07:05:27.770008 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.081203 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.109987 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.111297 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.115827 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.223719 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.243818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.243866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.243969 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.243995 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkw2\" (UniqueName: \"kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.247809 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-54wt4"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.250227 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.253537 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.269398 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-54wt4"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.332963 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.334459 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.339869 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5flpd"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.340286 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.340473 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.340603 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.343014 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.345957 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z68p\" (UniqueName: \"kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346061 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346103 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346136 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346167 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346222 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346308 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346350 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346379 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.346422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkw2\" (UniqueName: \"kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.348496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.349276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.350114 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.352238 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.397730 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.399169 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.410337 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.410824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkw2\" (UniqueName: \"kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2\") pod \"dnsmasq-dns-6c67bcdbf5-z486d\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.427782 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"]
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450760 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450831 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450860 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450896 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450953 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.450972 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451020 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmn5\" (UniqueName: \"kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451078 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5766\" (UniqueName: \"kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451109 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451131 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451152 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z68p\" (UniqueName: \"kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451241 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451277 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.451718 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.457236 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.457580 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.462603 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.480026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z68p\" (UniqueName: \"kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p\") pod \"ovn-controller-metrics-54wt4\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " pod="openstack/ovn-controller-metrics-54wt4"
Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.499517 4780 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.553932 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.554010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmn5\" (UniqueName: \"kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.554077 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5766\" (UniqueName: \"kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.554117 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555267 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555325 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555384 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555468 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.555728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.556351 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.560584 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.560640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.560966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.561472 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.561653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.562183 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.562954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.571832 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.574099 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5766\" (UniqueName: \"kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766\") pod \"dnsmasq-dns-984c76dd7-nwxnx\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.574129 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmn5\" (UniqueName: \"kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5\") pod \"ovn-northd-0\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.590554 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-54wt4" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.621771 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.651502 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.731119 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.731100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-cnh7x" event={"ID":"b7024867-1485-40d6-8054-06ec596a0585","Type":"ContainerDied","Data":"aeedc0e139310673a9014b30fd0cf10d4a416cb6a10bf23208870165fe3917c1"} Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.748803 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.759886 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7slp\" (UniqueName: \"kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp\") pod \"b7024867-1485-40d6-8054-06ec596a0585\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.759999 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config\") pod \"b7024867-1485-40d6-8054-06ec596a0585\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.760089 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc\") pod \"b7024867-1485-40d6-8054-06ec596a0585\" (UID: \"b7024867-1485-40d6-8054-06ec596a0585\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.760755 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7024867-1485-40d6-8054-06ec596a0585" (UID: "b7024867-1485-40d6-8054-06ec596a0585"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.761468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config" (OuterVolumeSpecName: "config") pod "b7024867-1485-40d6-8054-06ec596a0585" (UID: "b7024867-1485-40d6-8054-06ec596a0585"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.788263 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp" (OuterVolumeSpecName: "kube-api-access-g7slp") pod "b7024867-1485-40d6-8054-06ec596a0585" (UID: "b7024867-1485-40d6-8054-06ec596a0585"). InnerVolumeSpecName "kube-api-access-g7slp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.862146 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7slp\" (UniqueName: \"kubernetes.io/projected/b7024867-1485-40d6-8054-06ec596a0585-kube-api-access-g7slp\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.863064 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.863080 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7024867-1485-40d6-8054-06ec596a0585-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.896949 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.963452 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc\") pod \"ae7a1567-97ed-4968-90bb-4dab84011023\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.963564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgsz\" (UniqueName: \"kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz\") pod \"ae7a1567-97ed-4968-90bb-4dab84011023\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.963621 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config\") pod \"ae7a1567-97ed-4968-90bb-4dab84011023\" (UID: \"ae7a1567-97ed-4968-90bb-4dab84011023\") " Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.964509 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config" (OuterVolumeSpecName: "config") pod "ae7a1567-97ed-4968-90bb-4dab84011023" (UID: "ae7a1567-97ed-4968-90bb-4dab84011023"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.964978 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae7a1567-97ed-4968-90bb-4dab84011023" (UID: "ae7a1567-97ed-4968-90bb-4dab84011023"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.970742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz" (OuterVolumeSpecName: "kube-api-access-wzgsz") pod "ae7a1567-97ed-4968-90bb-4dab84011023" (UID: "ae7a1567-97ed-4968-90bb-4dab84011023"). InnerVolumeSpecName "kube-api-access-wzgsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:28 crc kubenswrapper[4780]: I1205 07:05:28.978010 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-54wt4"] Dec 05 07:05:28 crc kubenswrapper[4780]: W1205 07:05:28.981818 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fb4032b_ac6a_46ea_b301_500bf63d3518.slice/crio-c6b5bf27169922f945a1202d45f28ca83ef9e4658d2834e18b51e730f4ed0b4e WatchSource:0}: Error finding container c6b5bf27169922f945a1202d45f28ca83ef9e4658d2834e18b51e730f4ed0b4e: Status 404 returned error can't find the container with id c6b5bf27169922f945a1202d45f28ca83ef9e4658d2834e18b51e730f4ed0b4e Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.065932 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgsz\" (UniqueName: \"kubernetes.io/projected/ae7a1567-97ed-4968-90bb-4dab84011023-kube-api-access-wzgsz\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.065977 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.065988 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae7a1567-97ed-4968-90bb-4dab84011023-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.070946 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"] Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.107024 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"] Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.115829 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-cnh7x"] Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.308899 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 07:05:29 crc kubenswrapper[4780]: W1205 07:05:29.313432 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffce971d_fa60_450d_a347_29ba2a9c9c84.slice/crio-85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c WatchSource:0}: Error finding container 85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c: Status 404 returned error can't find the container with id 85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.356294 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"] Dec 05 07:05:29 crc kubenswrapper[4780]: W1205 07:05:29.362238 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b66162d_b916_4d41_9854_1e09d65622d1.slice/crio-b267c40bef672b739e66855a2ff75590a4e0454e87e001c8c86107366d86e178 WatchSource:0}: Error finding container b267c40bef672b739e66855a2ff75590a4e0454e87e001c8c86107366d86e178: Status 404 returned error can't find the container with id b267c40bef672b739e66855a2ff75590a4e0454e87e001c8c86107366d86e178 Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.740152 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-metrics-54wt4" event={"ID":"2fb4032b-ac6a-46ea-b301-500bf63d3518","Type":"ContainerStarted","Data":"614162aaf287785a7cbbc0f2442a28903aec8fa26e737ecff5c47ce7458a1617"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.740496 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-54wt4" event={"ID":"2fb4032b-ac6a-46ea-b301-500bf63d3518","Type":"ContainerStarted","Data":"c6b5bf27169922f945a1202d45f28ca83ef9e4658d2834e18b51e730f4ed0b4e"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.742855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" event={"ID":"74eeaa07-5584-4317-af5d-49c4404c281f","Type":"ContainerStarted","Data":"9e5676ae3aa98802d055ecc498088589c8dc6a8aea7b9ffd41f73057f18eb583"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.743950 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerStarted","Data":"85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.745361 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" event={"ID":"2b66162d-b916-4d41-9854-1e09d65622d1","Type":"ContainerStarted","Data":"b267c40bef672b739e66855a2ff75590a4e0454e87e001c8c86107366d86e178"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.746358 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" event={"ID":"ae7a1567-97ed-4968-90bb-4dab84011023","Type":"ContainerDied","Data":"ac82f69c447e58ce91985279a6778ce04b850a96f3e1b57a428c85a2616d0675"} Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.746453 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-zt6bw" Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.769478 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-54wt4" podStartSLOduration=1.769444939 podStartE2EDuration="1.769444939s" podCreationTimestamp="2025-12-05 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:29.759300937 +0000 UTC m=+1163.828817269" watchObservedRunningTime="2025-12-05 07:05:29.769444939 +0000 UTC m=+1163.838961271" Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.852439 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"] Dec 05 07:05:29 crc kubenswrapper[4780]: I1205 07:05:29.875021 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-zt6bw"] Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.151575 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7a1567-97ed-4968-90bb-4dab84011023" path="/var/lib/kubelet/pods/ae7a1567-97ed-4968-90bb-4dab84011023/volumes" Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.152232 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7024867-1485-40d6-8054-06ec596a0585" path="/var/lib/kubelet/pods/b7024867-1485-40d6-8054-06ec596a0585/volumes" Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.755173 4780 generic.go:334] "Generic (PLEG): container finished" podID="2b66162d-b916-4d41-9854-1e09d65622d1" containerID="af823dac9b197ec2aa3c1f19be304623a33925ff12e91c1e33ff2e564da58e6f" exitCode=0 Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.755283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" event={"ID":"2b66162d-b916-4d41-9854-1e09d65622d1","Type":"ContainerDied","Data":"af823dac9b197ec2aa3c1f19be304623a33925ff12e91c1e33ff2e564da58e6f"} Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.758263 4780 generic.go:334] "Generic (PLEG): container finished" podID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerID="316425e4a0ec5cf2c114631bb0b139a58858607d1ad2c0a8665ea65f6d08f90f" exitCode=0 Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.758474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerDied","Data":"316425e4a0ec5cf2c114631bb0b139a58858607d1ad2c0a8665ea65f6d08f90f"} Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.760397 4780 generic.go:334] "Generic (PLEG): container finished" podID="74eeaa07-5584-4317-af5d-49c4404c281f" containerID="45f0bf7b43fcfe7603aceb9ac1e0f37be2abc4383715942362accc667af3868f" exitCode=0 Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.760511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" event={"ID":"74eeaa07-5584-4317-af5d-49c4404c281f","Type":"ContainerDied","Data":"45f0bf7b43fcfe7603aceb9ac1e0f37be2abc4383715942362accc667af3868f"} Dec 05 07:05:30 crc kubenswrapper[4780]: I1205 07:05:30.764077 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerStarted","Data":"70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.773999 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerStarted","Data":"c0c14a6a851b055628b01b03335bcd98a85ff1861eeb6688a554462156bea99f"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.776998 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerStarted","Data":"9654c7269b622680dcb56608c38fc0a232f404664727ed94cb9dd7668100f74a"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.777603 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.779893 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" event={"ID":"2b66162d-b916-4d41-9854-1e09d65622d1","Type":"ContainerStarted","Data":"ddebfcc93532724895781a47d3f54e4292dff4cba9613a60489fe698d8ad6dae"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.780314 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.781993 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerStarted","Data":"ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.783679 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" event={"ID":"74eeaa07-5584-4317-af5d-49c4404c281f","Type":"ContainerStarted","Data":"bca648f89b5a2358ad06cd59dac4323cb0570a04f4f986d34c308b68f12a1098"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.784134 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.785430 4780 generic.go:334] "Generic (PLEG): container finished" podID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerID="3f96e5ef3fbb0acd20f1bcd74508b2612b79eecee120455dd0bc859e46d3b5c7" exitCode=0 Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.785458 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerDied","Data":"3f96e5ef3fbb0acd20f1bcd74508b2612b79eecee120455dd0bc859e46d3b5c7"} Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.844900 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.646411008 podStartE2EDuration="3.844862027s" podCreationTimestamp="2025-12-05 07:05:28 +0000 UTC" firstStartedPulling="2025-12-05 07:05:29.315520805 +0000 UTC m=+1163.385037137" lastFinishedPulling="2025-12-05 07:05:30.513971824 +0000 UTC m=+1164.583488156" observedRunningTime="2025-12-05 07:05:31.828825207 +0000 UTC m=+1165.898341549" watchObservedRunningTime="2025-12-05 07:05:31.844862027 +0000 UTC m=+1165.914378369" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.856672 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.96845598 podStartE2EDuration="50.856650894s" podCreationTimestamp="2025-12-05 07:04:41 +0000 UTC" firstStartedPulling="2025-12-05 07:04:43.667213352 +0000 UTC m=+1117.736729684" lastFinishedPulling="2025-12-05 07:05:25.555408266 
+0000 UTC m=+1159.624924598" observedRunningTime="2025-12-05 07:05:31.85206721 +0000 UTC m=+1165.921583552" watchObservedRunningTime="2025-12-05 07:05:31.856650894 +0000 UTC m=+1165.926167246" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.894039 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" podStartSLOduration=3.372511848 podStartE2EDuration="3.894017047s" podCreationTimestamp="2025-12-05 07:05:28 +0000 UTC" firstStartedPulling="2025-12-05 07:05:29.102065275 +0000 UTC m=+1163.171581597" lastFinishedPulling="2025-12-05 07:05:29.623570464 +0000 UTC m=+1163.693086796" observedRunningTime="2025-12-05 07:05:31.86991547 +0000 UTC m=+1165.939431812" watchObservedRunningTime="2025-12-05 07:05:31.894017047 +0000 UTC m=+1165.963533379" Dec 05 07:05:31 crc kubenswrapper[4780]: I1205 07:05:31.904844 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" podStartSLOduration=3.437286577 podStartE2EDuration="3.904820787s" podCreationTimestamp="2025-12-05 07:05:28 +0000 UTC" firstStartedPulling="2025-12-05 07:05:29.364488839 +0000 UTC m=+1163.434005171" lastFinishedPulling="2025-12-05 07:05:29.832023049 +0000 UTC m=+1163.901539381" observedRunningTime="2025-12-05 07:05:31.89859223 +0000 UTC m=+1165.968108572" watchObservedRunningTime="2025-12-05 07:05:31.904820787 +0000 UTC m=+1165.974337119" Dec 05 07:05:32 crc kubenswrapper[4780]: I1205 07:05:32.795617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerStarted","Data":"d68e53a20f7b0772cf31f43fd6387417cf438c45cf97337ca3c20b74894ceb64"} Dec 05 07:05:32 crc kubenswrapper[4780]: I1205 07:05:32.822323 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371986.03247 podStartE2EDuration="50.822305834s" podCreationTimestamp="2025-12-05 07:04:42 +0000 UTC" firstStartedPulling="2025-12-05 07:04:45.094654048 +0000 UTC m=+1119.164170380" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:32.817911756 +0000 UTC m=+1166.887428088" watchObservedRunningTime="2025-12-05 07:05:32.822305834 +0000 UTC m=+1166.891822166" Dec 05 07:05:32 crc kubenswrapper[4780]: I1205 07:05:32.897241 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 07:05:32 crc kubenswrapper[4780]: I1205 07:05:32.897292 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 07:05:34 crc kubenswrapper[4780]: I1205 07:05:34.179930 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 05 07:05:34 crc kubenswrapper[4780]: I1205 07:05:34.179976 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 05 07:05:34 crc kubenswrapper[4780]: I1205 07:05:34.423742 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.417676 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"] Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.417958 4780 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" containerID="cri-o://bca648f89b5a2358ad06cd59dac4323cb0570a04f4f986d34c308b68f12a1098" gracePeriod=10 Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.419106 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.447495 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.455148 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.461953 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.612053 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.612232 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.612337 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.612476 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnlf\" (UniqueName: \"kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.612600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.714025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.714093 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.714117 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.714166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsnlf\" (UniqueName: \"kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.714208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.715350 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.715439 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.715361 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.716199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.733818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnlf\" (UniqueName: \"kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf\") pod \"dnsmasq-dns-784d65c867-9pzr6\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:36 crc kubenswrapper[4780]: I1205 07:05:36.788693 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.253825 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.559266 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.564976 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.566907 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.566924 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.566908 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.571276 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-f8tg9" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.576024 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.731965 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.732034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjmj\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.732302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.732364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.732553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.834320 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " 
pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.834393 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjmj\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.834466 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.834497 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.834549 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: E1205 07:05:37.834950 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 07:05:37 crc kubenswrapper[4780]: E1205 07:05:37.834977 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:37 crc kubenswrapper[4780]: E1205 07:05:37.835037 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:38.335016736 +0000 UTC m=+1172.404533068 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.835169 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.835484 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.835688 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.840806 4780 generic.go:334] "Generic (PLEG): container finished" podID="74eeaa07-5584-4317-af5d-49c4404c281f" containerID="bca648f89b5a2358ad06cd59dac4323cb0570a04f4f986d34c308b68f12a1098" exitCode=0 Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.840909 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" event={"ID":"74eeaa07-5584-4317-af5d-49c4404c281f","Type":"ContainerDied","Data":"bca648f89b5a2358ad06cd59dac4323cb0570a04f4f986d34c308b68f12a1098"} Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.842305 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" event={"ID":"54e3232c-b0fc-4759-b08c-551fbdfc4c5f","Type":"ContainerStarted","Data":"74bf4e1f872c4c69f5db5de3ef62461a0049436e24c106030f65d562a1c9a05f"} Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.872005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:37 crc kubenswrapper[4780]: I1205 07:05:37.886653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjmj\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.152481 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jzmzj"] Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.154087 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.214540 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.217942 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.217984 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.225810 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jzmzj"] Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241045 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241124 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njql4\" (UniqueName: \"kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241455 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.241594 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 
07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343427 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343493 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343536 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njql4\" (UniqueName: \"kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343602 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343648 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.343709 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.344001 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: E1205 07:05:38.344226 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Dec 05 07:05:38 crc kubenswrapper[4780]: E1205 07:05:38.344249 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:38 crc kubenswrapper[4780]: E1205 07:05:38.344296 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:39.344277278 +0000 UTC m=+1173.413793610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.344600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.345017 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.348164 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.348809 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.349915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.362507 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njql4\" (UniqueName: \"kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4\") pod \"swift-ring-rebalance-jzmzj\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.503358 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.531408 4780 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.750075 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:38 crc kubenswrapper[4780]: I1205 07:05:38.991238 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jzmzj"] Dec 05 07:05:39 crc kubenswrapper[4780]: I1205 07:05:39.363729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:39 crc kubenswrapper[4780]: E1205 07:05:39.363956 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 07:05:39 crc kubenswrapper[4780]: E1205 07:05:39.364001 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:39 crc kubenswrapper[4780]: E1205 07:05:39.364052 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:41.364036814 +0000 UTC m=+1175.433553146 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:39 crc kubenswrapper[4780]: I1205 07:05:39.862218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jzmzj" event={"ID":"d536c619-112b-48c1-8efe-2e700ead9f8b","Type":"ContainerStarted","Data":"8a76bfea7a1776561a6e743e84d28c80fcb560ea3464b6ef1db13ae5a737cf9b"} Dec 05 07:05:41 crc kubenswrapper[4780]: I1205 07:05:41.400886 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:41 crc kubenswrapper[4780]: E1205 07:05:41.401099 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 07:05:41 crc kubenswrapper[4780]: E1205 07:05:41.401242 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:41 crc kubenswrapper[4780]: E1205 07:05:41.401293 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:45.401276584 +0000 UTC m=+1179.470792916 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.580228 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.637602 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc\") pod \"74eeaa07-5584-4317-af5d-49c4404c281f\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.637763 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb\") pod \"74eeaa07-5584-4317-af5d-49c4404c281f\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.637789 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config\") pod \"74eeaa07-5584-4317-af5d-49c4404c281f\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.637819 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkw2\" (UniqueName: \"kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2\") pod \"74eeaa07-5584-4317-af5d-49c4404c281f\" (UID: \"74eeaa07-5584-4317-af5d-49c4404c281f\") " Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.645747 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2" (OuterVolumeSpecName: "kube-api-access-4gkw2") pod "74eeaa07-5584-4317-af5d-49c4404c281f" (UID: "74eeaa07-5584-4317-af5d-49c4404c281f"). InnerVolumeSpecName "kube-api-access-4gkw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.707564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74eeaa07-5584-4317-af5d-49c4404c281f" (UID: "74eeaa07-5584-4317-af5d-49c4404c281f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.709932 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74eeaa07-5584-4317-af5d-49c4404c281f" (UID: "74eeaa07-5584-4317-af5d-49c4404c281f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.713571 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config" (OuterVolumeSpecName: "config") pod "74eeaa07-5584-4317-af5d-49c4404c281f" (UID: "74eeaa07-5584-4317-af5d-49c4404c281f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.740064 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.740101 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.740117 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74eeaa07-5584-4317-af5d-49c4404c281f-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.740130 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkw2\" (UniqueName: \"kubernetes.io/projected/74eeaa07-5584-4317-af5d-49c4404c281f-kube-api-access-4gkw2\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.745777 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.901531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" event={"ID":"74eeaa07-5584-4317-af5d-49c4404c281f","Type":"ContainerDied","Data":"9e5676ae3aa98802d055ecc498088589c8dc6a8aea7b9ffd41f73057f18eb583"} Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.901862 4780 scope.go:117] "RemoveContainer" containerID="bca648f89b5a2358ad06cd59dac4323cb0570a04f4f986d34c308b68f12a1098" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.901567 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.907333 4780 generic.go:334] "Generic (PLEG): container finished" podID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerID="4da3382c1cc78ad5ffd7d73cf7eb3aa0c2f34d72198f18e928a8b6c08c890fe2" exitCode=0 Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.907412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" event={"ID":"54e3232c-b0fc-4759-b08c-551fbdfc4c5f","Type":"ContainerDied","Data":"4da3382c1cc78ad5ffd7d73cf7eb3aa0c2f34d72198f18e928a8b6c08c890fe2"} Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.946291 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"] Dec 05 07:05:43 crc kubenswrapper[4780]: I1205 07:05:43.953956 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c67bcdbf5-z486d"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.047538 4780 scope.go:117] "RemoveContainer" containerID="45f0bf7b43fcfe7603aceb9ac1e0f37be2abc4383715942362accc667af3868f" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.138190 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.163950 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" path="/var/lib/kubelet/pods/74eeaa07-5584-4317-af5d-49c4404c281f/volumes" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.230515 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.467705 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a463-account-create-update-82ctr"] Dec 05 07:05:44 crc kubenswrapper[4780]: E1205 07:05:44.468620 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.468707 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" Dec 05 07:05:44 crc kubenswrapper[4780]: E1205 07:05:44.468777 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="init" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.468837 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="init" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.469791 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.471198 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.478629 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a463-account-create-update-82ctr"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.517012 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.571029 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.571336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89l8x\" (UniqueName: \"kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.676927 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89l8x\" (UniqueName: \"kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.677007 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.677912 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.689512 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-s4cxl"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.690826 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.706637 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89l8x\" (UniqueName: \"kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x\") pod \"placement-a463-account-create-update-82ctr\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.713329 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-s4cxl"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.785659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.785754 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtxj\" (UniqueName: \"kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.785769 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4824-account-create-update-m7ss5"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.799126 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.805166 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.806898 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4824-account-create-update-m7ss5"] Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.875250 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.887501 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtxj\" (UniqueName: \"kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.887558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnl4k\" (UniqueName: \"kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.887636 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.887664 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.888311 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.894322 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.909467 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtxj\" (UniqueName: \"kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj\") pod \"glance-db-create-s4cxl\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.990226 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.990409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnl4k\" (UniqueName: \"kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 
07:05:44.991622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:44 crc kubenswrapper[4780]: I1205 07:05:44.994473 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.009821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnl4k\" (UniqueName: \"kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k\") pod \"glance-4824-account-create-update-m7ss5\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.067801 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.140118 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.498351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:45 crc kubenswrapper[4780]: E1205 07:05:45.498494 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 07:05:45 crc kubenswrapper[4780]: E1205 07:05:45.498514 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:45 crc kubenswrapper[4780]: E1205 07:05:45.498569 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:05:53.498552452 +0000 UTC m=+1187.568068794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.936541 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" event={"ID":"54e3232c-b0fc-4759-b08c-551fbdfc4c5f","Type":"ContainerStarted","Data":"d4e673a5d0d5b95b4e087e873639a0e910f7ecaf75ab1e36768194122864027c"} Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.936715 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:45 crc kubenswrapper[4780]: I1205 07:05:45.957143 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" podStartSLOduration=9.957123446 podStartE2EDuration="9.957123446s" podCreationTimestamp="2025-12-05 07:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:45.952778359 +0000 UTC m=+1180.022294691" watchObservedRunningTime="2025-12-05 07:05:45.957123446 +0000 UTC m=+1180.026639778" Dec 05 07:05:46 crc kubenswrapper[4780]: I1205 07:05:46.961649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jzmzj" event={"ID":"d536c619-112b-48c1-8efe-2e700ead9f8b","Type":"ContainerStarted","Data":"3742f6a072d8632b7e8ed892a5b7bcdb214d48d59cb741bc0ca3f06d7123512e"} Dec 05 07:05:46 crc kubenswrapper[4780]: I1205 07:05:46.979179 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jzmzj" podStartSLOduration=1.391560327 podStartE2EDuration="8.979163474s" podCreationTimestamp="2025-12-05 07:05:38 +0000 UTC" firstStartedPulling="2025-12-05 07:05:38.999564827 +0000 UTC m=+1173.069081149" lastFinishedPulling="2025-12-05 07:05:46.587167964 +0000 UTC m=+1180.656684296" observedRunningTime="2025-12-05 07:05:46.977182691 +0000 UTC m=+1181.046699023" watchObservedRunningTime="2025-12-05 07:05:46.979163474 +0000 UTC m=+1181.048679806" Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.049511 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-s4cxl"] Dec 05 07:05:47 crc kubenswrapper[4780]: W1205 07:05:47.055002 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode28dc679_aa81_426b_b4cb_cc6c25c37791.slice/crio-0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c WatchSource:0}: Error finding container 0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c: Status 404 returned error can't find the container with id 0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c Dec 05 07:05:47 crc kubenswrapper[4780]: W1205 07:05:47.056682 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0263e19c_beda_4939_84f0_f5baf54923a5.slice/crio-52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5 WatchSource:0}: Error finding container 52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5: Status 404 returned error can't find the container with id 52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5 Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.058200 
4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a463-account-create-update-82ctr"] Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.178089 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4824-account-create-update-m7ss5"] Dec 05 07:05:47 crc kubenswrapper[4780]: W1205 07:05:47.179769 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6ae3ef0_0a62_4f04_aa97_1f167e0c5f3a.slice/crio-2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775 WatchSource:0}: Error finding container 2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775: Status 404 returned error can't find the container with id 2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775 Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.971167 4780 generic.go:334] "Generic (PLEG): container finished" podID="c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" containerID="c9a9b3910aba9f6425456a7bf0367e487b88680f7d5707056f6246aa6385429a" exitCode=0 Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.971251 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4824-account-create-update-m7ss5" event={"ID":"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a","Type":"ContainerDied","Data":"c9a9b3910aba9f6425456a7bf0367e487b88680f7d5707056f6246aa6385429a"} Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.972159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4824-account-create-update-m7ss5" event={"ID":"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a","Type":"ContainerStarted","Data":"2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775"} Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.973388 4780 generic.go:334] "Generic (PLEG): container finished" podID="0263e19c-beda-4939-84f0-f5baf54923a5" containerID="6c8199cb067935af9fd1559a6a1928e847d6217a972643137a70db93d03e5e5e" exitCode=0 Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.973454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a463-account-create-update-82ctr" event={"ID":"0263e19c-beda-4939-84f0-f5baf54923a5","Type":"ContainerDied","Data":"6c8199cb067935af9fd1559a6a1928e847d6217a972643137a70db93d03e5e5e"} Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.973478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a463-account-create-update-82ctr" event={"ID":"0263e19c-beda-4939-84f0-f5baf54923a5","Type":"ContainerStarted","Data":"52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5"} Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.978086 4780 generic.go:334] "Generic (PLEG): container finished" podID="e28dc679-aa81-426b-b4cb-cc6c25c37791" containerID="d19ad9ccad8be34e9adcc6e46df9307e0138b783fd220f548d39c05464deaafc" exitCode=0 Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.978779 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cxl" event={"ID":"e28dc679-aa81-426b-b4cb-cc6c25c37791","Type":"ContainerDied","Data":"d19ad9ccad8be34e9adcc6e46df9307e0138b783fd220f548d39c05464deaafc"} Dec 05 07:05:47 crc kubenswrapper[4780]: I1205 07:05:47.978806 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cxl" event={"ID":"e28dc679-aa81-426b-b4cb-cc6c25c37791","Type":"ContainerStarted","Data":"0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c"} Dec 05 07:05:48 crc 
kubenswrapper[4780]: I1205 07:05:48.504656 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c67bcdbf5-z486d" podUID="74eeaa07-5584-4317-af5d-49c4404c281f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.501764 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.587724 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.594339 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89l8x\" (UniqueName: \"kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x\") pod \"0263e19c-beda-4939-84f0-f5baf54923a5\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.594396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts\") pod \"0263e19c-beda-4939-84f0-f5baf54923a5\" (UID: \"0263e19c-beda-4939-84f0-f5baf54923a5\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.594480 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts\") pod \"e28dc679-aa81-426b-b4cb-cc6c25c37791\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.594537 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvtxj\" (UniqueName: \"kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj\") pod \"e28dc679-aa81-426b-b4cb-cc6c25c37791\" (UID: \"e28dc679-aa81-426b-b4cb-cc6c25c37791\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.595381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e28dc679-aa81-426b-b4cb-cc6c25c37791" (UID: "e28dc679-aa81-426b-b4cb-cc6c25c37791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.595631 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0263e19c-beda-4939-84f0-f5baf54923a5" (UID: "0263e19c-beda-4939-84f0-f5baf54923a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.596072 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.601674 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj" (OuterVolumeSpecName: "kube-api-access-fvtxj") pod "e28dc679-aa81-426b-b4cb-cc6c25c37791" (UID: "e28dc679-aa81-426b-b4cb-cc6c25c37791"). InnerVolumeSpecName "kube-api-access-fvtxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.610527 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x" (OuterVolumeSpecName: "kube-api-access-89l8x") pod "0263e19c-beda-4939-84f0-f5baf54923a5" (UID: "0263e19c-beda-4939-84f0-f5baf54923a5"). InnerVolumeSpecName "kube-api-access-89l8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.696396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts\") pod \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697160 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnl4k\" (UniqueName: \"kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k\") pod \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\" (UID: \"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a\") " Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.696990 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" (UID: "c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697591 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697620 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89l8x\" (UniqueName: \"kubernetes.io/projected/0263e19c-beda-4939-84f0-f5baf54923a5-kube-api-access-89l8x\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697635 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0263e19c-beda-4939-84f0-f5baf54923a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697648 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28dc679-aa81-426b-b4cb-cc6c25c37791-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.697660 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvtxj\" (UniqueName: \"kubernetes.io/projected/e28dc679-aa81-426b-b4cb-cc6c25c37791-kube-api-access-fvtxj\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.701483 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k" (OuterVolumeSpecName: "kube-api-access-pnl4k") pod "c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" (UID: "c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a"). InnerVolumeSpecName "kube-api-access-pnl4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.798511 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnl4k\" (UniqueName: \"kubernetes.io/projected/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a-kube-api-access-pnl4k\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.992462 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4824-account-create-update-m7ss5" event={"ID":"c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a","Type":"ContainerDied","Data":"2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775"} Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.992512 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd4055113f327d13a03d0353ad47a4ab9703a8c552aff05e932e188f89b0775" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.992629 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4824-account-create-update-m7ss5" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.993874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a463-account-create-update-82ctr" event={"ID":"0263e19c-beda-4939-84f0-f5baf54923a5","Type":"ContainerDied","Data":"52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5"} Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.993916 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a0d4819036f3f675bc325b36ef8a793756ed5e28376d86cc254bd556c924b5" Dec 05 07:05:49 crc kubenswrapper[4780]: I1205 07:05:49.993971 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a463-account-create-update-82ctr" Dec 05 07:05:50 crc kubenswrapper[4780]: I1205 07:05:50.004272 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-s4cxl" event={"ID":"e28dc679-aa81-426b-b4cb-cc6c25c37791","Type":"ContainerDied","Data":"0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c"} Dec 05 07:05:50 crc kubenswrapper[4780]: I1205 07:05:50.004310 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0526ec19b078e8b5cf029112cd307f221e87371984cc4f505ecbf2050bc86b4c" Dec 05 07:05:50 crc kubenswrapper[4780]: I1205 07:05:50.004361 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-s4cxl" Dec 05 07:05:51 crc kubenswrapper[4780]: I1205 07:05:51.791089 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:05:51 crc kubenswrapper[4780]: I1205 07:05:51.867998 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"] Dec 05 07:05:51 crc kubenswrapper[4780]: I1205 07:05:51.868433 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" containerName="dnsmasq-dns" containerID="cri-o://ddebfcc93532724895781a47d3f54e4292dff4cba9613a60489fe698d8ad6dae" gracePeriod=10 Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.019569 4780 generic.go:334] "Generic (PLEG): container finished" podID="2b66162d-b916-4d41-9854-1e09d65622d1" containerID="ddebfcc93532724895781a47d3f54e4292dff4cba9613a60489fe698d8ad6dae" exitCode=0 Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.019642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" event={"ID":"2b66162d-b916-4d41-9854-1e09d65622d1","Type":"ContainerDied","Data":"ddebfcc93532724895781a47d3f54e4292dff4cba9613a60489fe698d8ad6dae"} Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.331579 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.345265 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config\") pod \"2b66162d-b916-4d41-9854-1e09d65622d1\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.345320 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb\") pod \"2b66162d-b916-4d41-9854-1e09d65622d1\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.345355 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5766\" (UniqueName: \"kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766\") pod \"2b66162d-b916-4d41-9854-1e09d65622d1\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.345402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb\") pod \"2b66162d-b916-4d41-9854-1e09d65622d1\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.345493 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc\") pod \"2b66162d-b916-4d41-9854-1e09d65622d1\" (UID: \"2b66162d-b916-4d41-9854-1e09d65622d1\") " Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.354110 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766" (OuterVolumeSpecName: "kube-api-access-x5766") pod "2b66162d-b916-4d41-9854-1e09d65622d1" (UID: "2b66162d-b916-4d41-9854-1e09d65622d1"). InnerVolumeSpecName "kube-api-access-x5766". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.406479 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config" (OuterVolumeSpecName: "config") pod "2b66162d-b916-4d41-9854-1e09d65622d1" (UID: "2b66162d-b916-4d41-9854-1e09d65622d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.422960 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b66162d-b916-4d41-9854-1e09d65622d1" (UID: "2b66162d-b916-4d41-9854-1e09d65622d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.425127 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b66162d-b916-4d41-9854-1e09d65622d1" (UID: "2b66162d-b916-4d41-9854-1e09d65622d1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.441750 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b66162d-b916-4d41-9854-1e09d65622d1" (UID: "2b66162d-b916-4d41-9854-1e09d65622d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.448219 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.448250 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.448264 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5766\" (UniqueName: \"kubernetes.io/projected/2b66162d-b916-4d41-9854-1e09d65622d1-kube-api-access-x5766\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.448273 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:52 crc kubenswrapper[4780]: I1205 07:05:52.448282 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b66162d-b916-4d41-9854-1e09d65622d1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.030003 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" event={"ID":"2b66162d-b916-4d41-9854-1e09d65622d1","Type":"ContainerDied","Data":"b267c40bef672b739e66855a2ff75590a4e0454e87e001c8c86107366d86e178"} Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.030061 4780 scope.go:117] "RemoveContainer" containerID="ddebfcc93532724895781a47d3f54e4292dff4cba9613a60489fe698d8ad6dae" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.030202 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-984c76dd7-nwxnx" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.066841 4780 scope.go:117] "RemoveContainer" containerID="af823dac9b197ec2aa3c1f19be304623a33925ff12e91c1e33ff2e564da58e6f" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.068075 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"] Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.078448 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-984c76dd7-nwxnx"] Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.564345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0" Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.564530 4780 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.564561 4780 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.564627 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift podName:3782dca9-a617-47a2-9f89-96ba82200899 nodeName:}" failed. No retries permitted until 2025-12-05 07:06:09.564610431 +0000 UTC m=+1203.634126763 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift") pod "swift-storage-0" (UID: "3782dca9-a617-47a2-9f89-96ba82200899") : configmap "swift-ring-files" not found Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.864853 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" probeResult="failure" output=< Dec 05 07:05:53 crc kubenswrapper[4780]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 07:05:53 crc kubenswrapper[4780]: > Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.933278 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ccmqk"] Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.933870 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28dc679-aa81-426b-b4cb-cc6c25c37791" containerName="mariadb-database-create" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.933917 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28dc679-aa81-426b-b4cb-cc6c25c37791" containerName="mariadb-database-create" Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.933951 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0263e19c-beda-4939-84f0-f5baf54923a5" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.933960 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0263e19c-beda-4939-84f0-f5baf54923a5" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.933973 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" 
containerName="init" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.933981 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" containerName="init" Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.934000 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" containerName="dnsmasq-dns" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934008 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" containerName="dnsmasq-dns" Dec 05 07:05:53 crc kubenswrapper[4780]: E1205 07:05:53.934023 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934030 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934397 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934418 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" containerName="dnsmasq-dns" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934429 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0263e19c-beda-4939-84f0-f5baf54923a5" containerName="mariadb-account-create-update" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.934435 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28dc679-aa81-426b-b4cb-cc6c25c37791" containerName="mariadb-database-create" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.935809 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.939159 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.942234 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ccmqk"] Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.971085 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzg7\" (UniqueName: \"kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:53 crc kubenswrapper[4780]: I1205 07:05:53.971220 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.023231 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e7eb-account-create-update-wjj2g"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.024488 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.026776 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.036334 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7eb-account-create-update-wjj2g"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.072412 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.072496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snzg7\" (UniqueName: \"kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.073526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.092492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snzg7\" (UniqueName: \"kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7\") pod \"keystone-db-create-ccmqk\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.148435 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b66162d-b916-4d41-9854-1e09d65622d1" path="/var/lib/kubelet/pods/2b66162d-b916-4d41-9854-1e09d65622d1/volumes" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.173737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp5v\" (UniqueName: \"kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.174453 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.256619 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.276990 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.277093 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp5v\" (UniqueName: \"kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.278255 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.297065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkp5v\" (UniqueName: \"kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v\") pod \"keystone-e7eb-account-create-update-wjj2g\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.309114 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lmg7j"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.310138 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.326967 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lmg7j"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.340352 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.479990 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kcp\" (UniqueName: \"kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.480055 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.581499 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kcp\" (UniqueName: \"kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.581588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.583062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.606780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kcp\" (UniqueName: \"kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp\") pod \"placement-db-create-lmg7j\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") " pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.663004 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.727664 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ccmqk"] Dec 05 07:05:54 crc kubenswrapper[4780]: W1205 07:05:54.728963 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd11a6aab_3b40_43bd_bdd6_3fc630277d49.slice/crio-bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3 WatchSource:0}: Error finding container bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3: Status 404 returned error can't find the container with id bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3 Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.902435 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7eb-account-create-update-wjj2g"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.910964 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bsjtr"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.912254 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.915223 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.915428 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gkns2" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.921174 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bsjtr"] Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.997047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.997112 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.997138 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnb2\" (UniqueName: \"kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:54 crc kubenswrapper[4780]: I1205 07:05:54.997176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.057111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ccmqk" 
event={"ID":"d11a6aab-3b40-43bd-bdd6-3fc630277d49","Type":"ContainerStarted","Data":"f5a19d596fbaf2b4a17da0bcb1a9fbc8f4f9e6d4fb4526f80df2d3280f9de5d0"} Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.057168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ccmqk" event={"ID":"d11a6aab-3b40-43bd-bdd6-3fc630277d49","Type":"ContainerStarted","Data":"bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3"} Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.062797 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7eb-account-create-update-wjj2g" event={"ID":"42c4e4b4-a803-47f4-99eb-fb15b65b82b2","Type":"ContainerStarted","Data":"15f6ee79f3629bbe75c060a327dfdb6879f7c38ec05bc92c0139c4a7a8828fc9"} Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.077979 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-ccmqk" podStartSLOduration=2.077946106 podStartE2EDuration="2.077946106s" podCreationTimestamp="2025-12-05 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:05:55.072169009 +0000 UTC m=+1189.141685351" watchObservedRunningTime="2025-12-05 07:05:55.077946106 +0000 UTC m=+1189.147462448" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.098542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.098591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnb2\" (UniqueName: \"kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.098643 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.098746 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.107680 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.107825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " 
pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.116249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.119704 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnb2\" (UniqueName: \"kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2\") pod \"glance-db-sync-bsjtr\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.157088 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lmg7j"] Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.250973 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bsjtr" Dec 05 07:05:55 crc kubenswrapper[4780]: I1205 07:05:55.798634 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bsjtr"] Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.070171 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bsjtr" event={"ID":"c79e9679-696f-498c-a1c0-d2d465c637fd","Type":"ContainerStarted","Data":"0aca8841f37cf8659034d2af428b74daa745625212d5b4e31ef47c67fb08ea29"} Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.071986 4780 generic.go:334] "Generic (PLEG): container finished" podID="d536c619-112b-48c1-8efe-2e700ead9f8b" containerID="3742f6a072d8632b7e8ed892a5b7bcdb214d48d59cb741bc0ca3f06d7123512e" exitCode=0 Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.072041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jzmzj" event={"ID":"d536c619-112b-48c1-8efe-2e700ead9f8b","Type":"ContainerDied","Data":"3742f6a072d8632b7e8ed892a5b7bcdb214d48d59cb741bc0ca3f06d7123512e"} Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.073791 4780 generic.go:334] "Generic (PLEG): container finished" podID="d11a6aab-3b40-43bd-bdd6-3fc630277d49" containerID="f5a19d596fbaf2b4a17da0bcb1a9fbc8f4f9e6d4fb4526f80df2d3280f9de5d0" exitCode=0 Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.073866 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ccmqk" event={"ID":"d11a6aab-3b40-43bd-bdd6-3fc630277d49","Type":"ContainerDied","Data":"f5a19d596fbaf2b4a17da0bcb1a9fbc8f4f9e6d4fb4526f80df2d3280f9de5d0"} Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.075455 4780 generic.go:334] "Generic (PLEG): container finished" podID="42c4e4b4-a803-47f4-99eb-fb15b65b82b2" containerID="2588118c8f724767c81041c6d0223d7bf9ca8adeb4a04b8f033b9b3ce59c1085" exitCode=0 Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.075541 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7eb-account-create-update-wjj2g" event={"ID":"42c4e4b4-a803-47f4-99eb-fb15b65b82b2","Type":"ContainerDied","Data":"2588118c8f724767c81041c6d0223d7bf9ca8adeb4a04b8f033b9b3ce59c1085"} Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.076943 4780 generic.go:334] "Generic (PLEG): container finished" podID="9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" containerID="0628596ee4bfc90e900abe0430b5bf28210f42e1675a435b855a7d6fae706283" exitCode=0 
Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.076975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmg7j" event={"ID":"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069","Type":"ContainerDied","Data":"0628596ee4bfc90e900abe0430b5bf28210f42e1675a435b855a7d6fae706283"}
Dec 05 07:05:56 crc kubenswrapper[4780]: I1205 07:05:56.076992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmg7j" event={"ID":"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069","Type":"ContainerStarted","Data":"c3b45be39ba18afec54462e82ad5ad5fab6d8352f05ee6bbc7e6f11ff17c0a82"}
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.526901 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lmg7j"
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.546464 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts\") pod \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") "
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.546738 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8kcp\" (UniqueName: \"kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp\") pod \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\" (UID: \"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069\") "
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.549955 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" (UID: "9221549a-ed1e-4bfc-8bf7-6ecaec0c2069"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.557304 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp" (OuterVolumeSpecName: "kube-api-access-z8kcp") pod "9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" (UID: "9221549a-ed1e-4bfc-8bf7-6ecaec0c2069"). InnerVolumeSpecName "kube-api-access-z8kcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.649186 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8kcp\" (UniqueName: \"kubernetes.io/projected/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-kube-api-access-z8kcp\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.649218 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.690345 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ccmqk"
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.701060 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7eb-account-create-update-wjj2g"
Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.712742 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jzmzj"
Need to start a new one" pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts\") pod \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750613 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750652 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkp5v\" (UniqueName: \"kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v\") pod \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\" (UID: \"42c4e4b4-a803-47f4-99eb-fb15b65b82b2\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750686 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.750711 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts\") pod \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751026 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42c4e4b4-a803-47f4-99eb-fb15b65b82b2" (UID: "42c4e4b4-a803-47f4-99eb-fb15b65b82b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751118 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751167 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751195 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snzg7\" (UniqueName: \"kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7\") pod \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\" (UID: \"d11a6aab-3b40-43bd-bdd6-3fc630277d49\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njql4\" (UniqueName: \"kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4\") pod \"d536c619-112b-48c1-8efe-2e700ead9f8b\" (UID: \"d536c619-112b-48c1-8efe-2e700ead9f8b\") " Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751629 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751645 4780 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.751861 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d11a6aab-3b40-43bd-bdd6-3fc630277d49" (UID: "d11a6aab-3b40-43bd-bdd6-3fc630277d49"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.752150 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.755177 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7" (OuterVolumeSpecName: "kube-api-access-snzg7") pod "d11a6aab-3b40-43bd-bdd6-3fc630277d49" (UID: "d11a6aab-3b40-43bd-bdd6-3fc630277d49"). InnerVolumeSpecName "kube-api-access-snzg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.755230 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v" (OuterVolumeSpecName: "kube-api-access-pkp5v") pod "42c4e4b4-a803-47f4-99eb-fb15b65b82b2" (UID: "42c4e4b4-a803-47f4-99eb-fb15b65b82b2"). InnerVolumeSpecName "kube-api-access-pkp5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.756077 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4" (OuterVolumeSpecName: "kube-api-access-njql4") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "kube-api-access-njql4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.761171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.771910 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts" (OuterVolumeSpecName: "scripts") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.773837 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.775173 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d536c619-112b-48c1-8efe-2e700ead9f8b" (UID: "d536c619-112b-48c1-8efe-2e700ead9f8b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853731 4780 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853763 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkp5v\" (UniqueName: \"kubernetes.io/projected/42c4e4b4-a803-47f4-99eb-fb15b65b82b2-kube-api-access-pkp5v\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853772 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d536c619-112b-48c1-8efe-2e700ead9f8b-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853782 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a6aab-3b40-43bd-bdd6-3fc630277d49-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853792 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d536c619-112b-48c1-8efe-2e700ead9f8b-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853801 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853810 4780 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d536c619-112b-48c1-8efe-2e700ead9f8b-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853818 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snzg7\" (UniqueName: \"kubernetes.io/projected/d11a6aab-3b40-43bd-bdd6-3fc630277d49-kube-api-access-snzg7\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:57 crc kubenswrapper[4780]: I1205 07:05:57.853826 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njql4\" (UniqueName: \"kubernetes.io/projected/d536c619-112b-48c1-8efe-2e700ead9f8b-kube-api-access-njql4\") on node \"crc\" DevicePath \"\"" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.111390 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lmg7j" event={"ID":"9221549a-ed1e-4bfc-8bf7-6ecaec0c2069","Type":"ContainerDied","Data":"c3b45be39ba18afec54462e82ad5ad5fab6d8352f05ee6bbc7e6f11ff17c0a82"} Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.111438 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b45be39ba18afec54462e82ad5ad5fab6d8352f05ee6bbc7e6f11ff17c0a82" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.111522 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lmg7j" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.126079 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jzmzj" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.126089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jzmzj" event={"ID":"d536c619-112b-48c1-8efe-2e700ead9f8b","Type":"ContainerDied","Data":"8a76bfea7a1776561a6e743e84d28c80fcb560ea3464b6ef1db13ae5a737cf9b"} Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.126128 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a76bfea7a1776561a6e743e84d28c80fcb560ea3464b6ef1db13ae5a737cf9b" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.128964 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ccmqk" event={"ID":"d11a6aab-3b40-43bd-bdd6-3fc630277d49","Type":"ContainerDied","Data":"bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3"} Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.129015 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc43cca0b5bb45efacad54446d84b50f09ab5db95950f762e7682628105e11b3" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.129089 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ccmqk" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.131579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7eb-account-create-update-wjj2g" event={"ID":"42c4e4b4-a803-47f4-99eb-fb15b65b82b2","Type":"ContainerDied","Data":"15f6ee79f3629bbe75c060a327dfdb6879f7c38ec05bc92c0139c4a7a8828fc9"} Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.131608 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f6ee79f3629bbe75c060a327dfdb6879f7c38ec05bc92c0139c4a7a8828fc9" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.131634 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7eb-account-create-update-wjj2g" Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.877589 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" probeResult="failure" output=< Dec 05 07:05:58 crc kubenswrapper[4780]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 07:05:58 crc kubenswrapper[4780]: > Dec 05 07:05:58 crc kubenswrapper[4780]: I1205 07:05:58.922921 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.153954 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fs2vs-config-tb46f"] Dec 05 07:05:59 crc kubenswrapper[4780]: E1205 07:05:59.154329 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c4e4b4-a803-47f4-99eb-fb15b65b82b2" containerName="mariadb-account-create-update" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154346 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c4e4b4-a803-47f4-99eb-fb15b65b82b2" containerName="mariadb-account-create-update" Dec 05 07:05:59 crc kubenswrapper[4780]: E1205 07:05:59.154378 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11a6aab-3b40-43bd-bdd6-3fc630277d49" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154384 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11a6aab-3b40-43bd-bdd6-3fc630277d49" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: E1205 07:05:59.154396 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d536c619-112b-48c1-8efe-2e700ead9f8b" containerName="swift-ring-rebalance" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154402 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d536c619-112b-48c1-8efe-2e700ead9f8b" containerName="swift-ring-rebalance" Dec 05 07:05:59 crc kubenswrapper[4780]: E1205 07:05:59.154417 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154422 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154582 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11a6aab-3b40-43bd-bdd6-3fc630277d49" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154590 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c4e4b4-a803-47f4-99eb-fb15b65b82b2" containerName="mariadb-account-create-update" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154598 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" containerName="mariadb-database-create" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.154609 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d536c619-112b-48c1-8efe-2e700ead9f8b" containerName="swift-ring-rebalance" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.155176 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.159718 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.164474 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fs2vs-config-tb46f"] Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.191671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.191733 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.191754 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6srn8\" (UniqueName: \"kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.191769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.191816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.192613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.294941 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts\") pod 
\"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295073 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6srn8\" (UniqueName: \"kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295112 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295144 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295352 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295402 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.295865 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.297217 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts\") pod 
\"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.316544 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6srn8\" (UniqueName: \"kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8\") pod \"ovn-controller-fs2vs-config-tb46f\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.476443 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.911188 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.911579 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:05:59 crc kubenswrapper[4780]: I1205 07:05:59.934668 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fs2vs-config-tb46f"] Dec 05 07:06:00 crc kubenswrapper[4780]: I1205 07:06:00.151041 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerID="edf3bc8a8d63ed0f663b4a238ab5c8946207e1f70506209b7613ac8e39b9757a" exitCode=0 Dec 05 07:06:00 crc kubenswrapper[4780]: I1205 07:06:00.151333 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85","Type":"ContainerDied","Data":"edf3bc8a8d63ed0f663b4a238ab5c8946207e1f70506209b7613ac8e39b9757a"} Dec 05 07:06:00 crc kubenswrapper[4780]: I1205 07:06:00.158075 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs-config-tb46f" event={"ID":"2da3efa2-205e-4feb-9c5e-e8938ca71e72","Type":"ContainerStarted","Data":"2b52ffd87730fb38fce079f4e18d7796d7d658e96f09a8c7158f9727580e94b2"} Dec 05 07:06:00 crc kubenswrapper[4780]: E1205 07:06:00.976407 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da3efa2_205e_4feb_9c5e_e8938ca71e72.slice/crio-a8e973ca6d8a4a47061fb448fffbaf4efe94cfce4348cf8c675d7acfd4e3fd45.scope\": RecentStats: unable to find data in memory cache]" Dec 05 07:06:01 crc kubenswrapper[4780]: I1205 07:06:01.194315 4780 generic.go:334] "Generic (PLEG): container finished" podID="2da3efa2-205e-4feb-9c5e-e8938ca71e72" containerID="a8e973ca6d8a4a47061fb448fffbaf4efe94cfce4348cf8c675d7acfd4e3fd45" exitCode=0 Dec 05 07:06:01 crc kubenswrapper[4780]: I1205 07:06:01.194395 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs-config-tb46f" event={"ID":"2da3efa2-205e-4feb-9c5e-e8938ca71e72","Type":"ContainerDied","Data":"a8e973ca6d8a4a47061fb448fffbaf4efe94cfce4348cf8c675d7acfd4e3fd45"} Dec 05 07:06:01 crc 
Dec 05 07:06:01 crc kubenswrapper[4780]: I1205 07:06:01.197541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 07:06:01 crc kubenswrapper[4780]: I1205 07:06:01.262987 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.674292374 podStartE2EDuration="1m22.262941604s" podCreationTimestamp="2025-12-05 07:04:39 +0000 UTC" firstStartedPulling="2025-12-05 07:04:42.011778927 +0000 UTC m=+1116.081295259" lastFinishedPulling="2025-12-05 07:05:26.600428157 +0000 UTC m=+1160.669944489" observedRunningTime="2025-12-05 07:06:01.254724692 +0000 UTC m=+1195.324241044" watchObservedRunningTime="2025-12-05 07:06:01.262941604 +0000 UTC m=+1195.332457946"
Dec 05 07:06:03 crc kubenswrapper[4780]: I1205 07:06:03.870563 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fs2vs"
Dec 05 07:06:04 crc kubenswrapper[4780]: I1205 07:06:04.244663 4780 generic.go:334] "Generic (PLEG): container finished" podID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerID="c0c14a6a851b055628b01b03335bcd98a85ff1861eeb6688a554462156bea99f" exitCode=0
Dec 05 07:06:04 crc kubenswrapper[4780]: I1205 07:06:04.244713 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerDied","Data":"c0c14a6a851b055628b01b03335bcd98a85ff1861eeb6688a554462156bea99f"}
Dec 05 07:06:09 crc kubenswrapper[4780]: I1205 07:06:09.661568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0"
Dec 05 07:06:09 crc kubenswrapper[4780]: I1205 07:06:09.669011 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"swift-storage-0\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " pod="openstack/swift-storage-0"
Dec 05 07:06:09 crc kubenswrapper[4780]: I1205 07:06:09.705065 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
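Note two things in the entries above. First, the etc-swift mount for swift-storage-0 succeeds at 07:06:09, exactly the retry time set by the earlier 16s backoff, once the swift-ring-files ConfigMap exists. Second, podStartSLOduration is the end-to-end startup latency minus the image-pull window, so slow registry pulls do not count against the startup SLO: 1m22.26s end-to-end minus the 44.59s pull window leaves the reported 37.67s. A sketch reproducing that arithmetic from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// parse reads the Go time.Time string format used in the log lines.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-05 07:04:39 +0000 UTC")
	firstPull := parse("2025-12-05 07:04:42.011778927 +0000 UTC")
	lastPull := parse("2025-12-05 07:05:26.600428157 +0000 UTC")
	observed := parse("2025-12-05 07:06:01.262941604 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // subtract the image-pull window
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
	// Prints 1m22.262941604s and 37.674292374s, matching the tracker's line.
}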
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 07:06:11 crc kubenswrapper[4780]: I1205 07:06:11.175458 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.402041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs-config-tb46f" event={"ID":"2da3efa2-205e-4feb-9c5e-e8938ca71e72","Type":"ContainerDied","Data":"2b52ffd87730fb38fce079f4e18d7796d7d658e96f09a8c7158f9727580e94b2"} Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.402680 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b52ffd87730fb38fce079f4e18d7796d7d658e96f09a8c7158f9727580e94b2" Dec 05 07:06:17 crc kubenswrapper[4780]: E1205 07:06:17.444796 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63" Dec 05 07:06:17 crc kubenswrapper[4780]: E1205 07:06:17.445347 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rnb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bsjtr_openstack(c79e9679-696f-498c-a1c0-d2d465c637fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:06:17 crc kubenswrapper[4780]: E1205 07:06:17.449750 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bsjtr" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.491967 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612482 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612540 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6srn8\" (UniqueName: \"kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612666 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.612726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn\") pod \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\" (UID: \"2da3efa2-205e-4feb-9c5e-e8938ca71e72\") " Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613114 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613148 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run" (OuterVolumeSpecName: "var-run") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613585 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613794 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613817 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.613849 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2da3efa2-205e-4feb-9c5e-e8938ca71e72-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.614499 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.615523 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts" (OuterVolumeSpecName: "scripts") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.617672 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8" (OuterVolumeSpecName: "kube-api-access-6srn8") pod "2da3efa2-205e-4feb-9c5e-e8938ca71e72" (UID: "2da3efa2-205e-4feb-9c5e-e8938ca71e72"). InnerVolumeSpecName "kube-api-access-6srn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.715155 4780 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.715480 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6srn8\" (UniqueName: \"kubernetes.io/projected/2da3efa2-205e-4feb-9c5e-e8938ca71e72-kube-api-access-6srn8\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.715490 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2da3efa2-205e-4feb-9c5e-e8938ca71e72-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:17 crc kubenswrapper[4780]: I1205 07:06:17.918525 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:06:17 crc kubenswrapper[4780]: W1205 07:06:17.922571 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3782dca9_a617_47a2_9f89_96ba82200899.slice/crio-1524a52df1ae1b2390ea7df5c95ed11ba26ce692fc8c6eaf90cf7bb6bc40dd8a WatchSource:0}: Error finding container 1524a52df1ae1b2390ea7df5c95ed11ba26ce692fc8c6eaf90cf7bb6bc40dd8a: Status 404 returned error can't find the container with id 1524a52df1ae1b2390ea7df5c95ed11ba26ce692fc8c6eaf90cf7bb6bc40dd8a Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.410499 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerStarted","Data":"5450b625e2fd6628a65ff330106c052fa609d51529eba7c91d50eb2a2c2bfed0"} Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.410726 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.412670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"1524a52df1ae1b2390ea7df5c95ed11ba26ce692fc8c6eaf90cf7bb6bc40dd8a"} Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.412703 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fs2vs-config-tb46f" Dec 05 07:06:18 crc kubenswrapper[4780]: E1205 07:06:18.414184 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:481073ac9deefb38bbd989aaa8dd7aedb4e0af26017f4883f85fce433380bf63\\\"\"" pod="openstack/glance-db-sync-bsjtr" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.439638 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371936.415155 podStartE2EDuration="1m40.439620548s" podCreationTimestamp="2025-12-05 07:04:38 +0000 UTC" firstStartedPulling="2025-12-05 07:04:40.877723407 +0000 UTC m=+1114.947239739" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:18.431425136 +0000 UTC m=+1212.500941478" watchObservedRunningTime="2025-12-05 07:06:18.439620548 +0000 UTC m=+1212.509136880" Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.589408 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fs2vs-config-tb46f"] Dec 05 07:06:18 crc kubenswrapper[4780]: I1205 07:06:18.597642 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fs2vs-config-tb46f"] Dec 05 07:06:20 crc kubenswrapper[4780]: I1205 07:06:20.151261 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da3efa2-205e-4feb-9c5e-e8938ca71e72" path="/var/lib/kubelet/pods/2da3efa2-205e-4feb-9c5e-e8938ca71e72/volumes" Dec 05 07:06:23 crc kubenswrapper[4780]: I1205 07:06:23.448778 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"986982b89c1a073d0278b92d9a5c06cd37c1b46a70abd7b84b4c71b882785ce2"} Dec 05 07:06:23 crc kubenswrapper[4780]: I1205 07:06:23.449353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"2277d9a62f83380fb829fd9437023d5d7a6a251cbed820feb5e9eaa847bd436b"} Dec 05 07:06:23 crc kubenswrapper[4780]: I1205 07:06:23.449364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"197ec1cb2e42b7eee2a07e5abda174f729b39f1a30d1ee19cdf7fc349964c7dc"} Dec 05 07:06:24 crc kubenswrapper[4780]: I1205 07:06:24.457436 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"01c18b876687d6b2d2dd2ebf76cfbc99348debda854332c91ee5bc6b8029eb69"} Dec 05 07:06:28 crc kubenswrapper[4780]: I1205 07:06:28.490341 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"82189ab9e1551dc0f5140613417ba953bda692d106179c487635cae012edccd5"} Dec 05 07:06:28 crc kubenswrapper[4780]: I1205 07:06:28.491000 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"cf24f362016fc9a9b24061e240eb546322a73d461803e71a142f2ecfcd0d8c78"} Dec 05 07:06:29 crc 
kubenswrapper[4780]: I1205 07:06:29.907592 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:06:29 crc kubenswrapper[4780]: I1205 07:06:29.908010 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.320104 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.542119 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"f9700fc7cbaed727d4ba60770c2a1911c7899565ee6e7549689b2e0350e69b87"} Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.542170 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"9225d4a3e6c1922ea527f674b2f5bde35d6a0f6a680ffb6e130f1dab6447551d"} Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.733939 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wdgqc"] Dec 05 07:06:30 crc kubenswrapper[4780]: E1205 07:06:30.734567 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da3efa2-205e-4feb-9c5e-e8938ca71e72" containerName="ovn-config" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.734580 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da3efa2-205e-4feb-9c5e-e8938ca71e72" containerName="ovn-config" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.734752 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da3efa2-205e-4feb-9c5e-e8938ca71e72" containerName="ovn-config" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.735262 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.749368 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-796f-account-create-update-dc2hl"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.750704 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.755215 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.758147 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-796f-account-create-update-dc2hl"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.764951 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wdgqc"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.830406 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cb8n\" (UniqueName: \"kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.830450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.830471 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.830505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf24q\" (UniqueName: \"kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.835943 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-qqtqb"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.836936 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.847902 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1be5-account-create-update-bj4nb"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.848914 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.861449 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.879438 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1be5-account-create-update-bj4nb"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.932675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.933347 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf24q\" (UniqueName: \"kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.933678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cb8n\" (UniqueName: \"kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.933735 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.933782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.934593 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.970426 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qqtqb"] Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.974694 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf24q\" (UniqueName: \"kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q\") pod \"cinder-db-create-wdgqc\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:30 crc kubenswrapper[4780]: I1205 07:06:30.998098 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cb8n\" (UniqueName: 
\"kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n\") pod \"barbican-796f-account-create-update-dc2hl\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.038946 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.039000 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5b62\" (UniqueName: \"kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.039043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9pn\" (UniqueName: \"kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.039073 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.063929 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vv8kw"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.065081 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.087197 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.091853 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a927-account-create-update-dmw95"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.099955 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vv8kw"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.100084 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.108836 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.109291 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a927-account-create-update-dmw95"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.117361 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140323 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9pn\" (UniqueName: \"kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140381 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140403 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rml7\" (UniqueName: \"kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140506 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140531 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5b62\" (UniqueName: \"kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140564 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.140599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khz9m\" (UniqueName: 
\"kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.141607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.142070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.167394 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jjpc8"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.168394 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.171788 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9pn\" (UniqueName: \"kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn\") pod \"cinder-1be5-account-create-update-bj4nb\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.185903 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.186040 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8svcl" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.186264 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.186418 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.205286 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.213574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5b62\" (UniqueName: \"kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62\") pod \"barbican-db-create-qqtqb\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.216612 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjpc8"] Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.247598 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rml7\" (UniqueName: \"kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.247695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.247729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khz9m\" (UniqueName: \"kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.247780 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.248468 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.249625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.274952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khz9m\" (UniqueName: \"kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m\") pod \"neutron-db-create-vv8kw\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.282466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8rml7\" (UniqueName: \"kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7\") pod \"neutron-a927-account-create-update-dmw95\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.384526 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.451062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.451163 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9vf\" (UniqueName: \"kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.451336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.456905 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.464131 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.554270 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9vf\" (UniqueName: \"kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.554578 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.554675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.569699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.573145 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.588693 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9vf\" (UniqueName: \"kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf\") pod \"keystone-db-sync-jjpc8\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.823590 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:31 crc kubenswrapper[4780]: I1205 07:06:31.947460 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1be5-account-create-update-bj4nb"] Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.012920 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-796f-account-create-update-dc2hl"] Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.021651 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qqtqb"] Dec 05 07:06:32 crc kubenswrapper[4780]: W1205 07:06:32.028432 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6a6b6eb_0713_42a6_ad08_582ea6d835cb.slice/crio-641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223 WatchSource:0}: Error finding container 641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223: Status 404 returned error can't find the container with id 641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223 Dec 05 07:06:32 crc kubenswrapper[4780]: W1205 07:06:32.029861 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c96ddb_0a87_4ae3_8676_03bc8afaf100.slice/crio-412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b WatchSource:0}: Error finding container 412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b: Status 404 returned error can't find the container with id 412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.040422 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wdgqc"] Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.095571 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vv8kw"] Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.202136 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jjpc8"] Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.258497 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a927-account-create-update-dmw95"] Dec 05 07:06:32 crc kubenswrapper[4780]: W1205 07:06:32.415869 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod503f38d6_82f5_473e_9c59_2c32d8b8f855.slice/crio-90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7 WatchSource:0}: Error finding container 90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7: Status 404 returned error can't find the container with id 90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7 Dec 05 07:06:32 crc kubenswrapper[4780]: W1205 07:06:32.422246 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7a5cb7e_5c29_4b34_9260_436b933dc431.slice/crio-cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47 WatchSource:0}: Error finding container cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47: Status 404 returned error can't find the container with id cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47 Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.574192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qqtqb" 
event={"ID":"29cd460d-e210-4a2d-9199-ecf32fbd3fb6","Type":"ContainerStarted","Data":"83730f5eb0b60e51d8e61a192f862609e541d39cebe09c73a2ca53d506721475"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.578683 4780 generic.go:334] "Generic (PLEG): container finished" podID="11b3d441-7101-46f0-8bcc-5ae9352dfa6c" containerID="c430401df565c59426a009b5d9663962ceeacfa3becd8357efdae4e100ab7a21" exitCode=0 Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.578775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1be5-account-create-update-bj4nb" event={"ID":"11b3d441-7101-46f0-8bcc-5ae9352dfa6c","Type":"ContainerDied","Data":"c430401df565c59426a009b5d9663962ceeacfa3becd8357efdae4e100ab7a21"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.578802 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1be5-account-create-update-bj4nb" event={"ID":"11b3d441-7101-46f0-8bcc-5ae9352dfa6c","Type":"ContainerStarted","Data":"848d38687408f6bc330272304fc75ed5d64b7fcc0f9e6e997c52dfcaaf77b168"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.580689 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjpc8" event={"ID":"503f38d6-82f5-473e-9c59-2c32d8b8f855","Type":"ContainerStarted","Data":"90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.582288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-796f-account-create-update-dc2hl" event={"ID":"d6a6b6eb-0713-42a6-ad08-582ea6d835cb","Type":"ContainerStarted","Data":"641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.589346 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vv8kw" event={"ID":"54659268-947f-4e6d-8b41-3fc32e830ed3","Type":"ContainerStarted","Data":"d602dc235f94b7095877e443356745943f9fae6e5670bfa9c47ed1c147d967b5"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.608231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wdgqc" event={"ID":"e4c96ddb-0a87-4ae3-8676-03bc8afaf100","Type":"ContainerStarted","Data":"412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.621244 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bsjtr" event={"ID":"c79e9679-696f-498c-a1c0-d2d465c637fd","Type":"ContainerStarted","Data":"feabab19310c27909296902614d550ac38609227b665b0437e180098bac35617"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.634852 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a927-account-create-update-dmw95" event={"ID":"a7a5cb7e-5c29-4b34-9260-436b933dc431","Type":"ContainerStarted","Data":"cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47"} Dec 05 07:06:32 crc kubenswrapper[4780]: I1205 07:06:32.649382 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bsjtr" podStartSLOduration=3.862794534 podStartE2EDuration="38.64935894s" podCreationTimestamp="2025-12-05 07:05:54 +0000 UTC" firstStartedPulling="2025-12-05 07:05:55.810160868 +0000 UTC m=+1189.879677200" lastFinishedPulling="2025-12-05 07:06:30.596725284 +0000 UTC m=+1224.666241606" observedRunningTime="2025-12-05 07:06:32.648345153 +0000 UTC m=+1226.717861495" watchObservedRunningTime="2025-12-05 07:06:32.64935894 +0000 UTC 
m=+1226.718875272" Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.647539 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"960fcbfc395793c247badb8fdde1b5984a271aa278bc9c88ddf14fc26e90bea3"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.648254 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"81ec73c2cb79b863b687984909f69f4090990583d64f1c7fd7e543c07d0c1a61"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.649966 4780 generic.go:334] "Generic (PLEG): container finished" podID="e4c96ddb-0a87-4ae3-8676-03bc8afaf100" containerID="8ef8574b8ff939edd05fbd221ac45e6aba76f762b7fac2adf4514b0ac08a5360" exitCode=0 Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.650093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wdgqc" event={"ID":"e4c96ddb-0a87-4ae3-8676-03bc8afaf100","Type":"ContainerDied","Data":"8ef8574b8ff939edd05fbd221ac45e6aba76f762b7fac2adf4514b0ac08a5360"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.651848 4780 generic.go:334] "Generic (PLEG): container finished" podID="a7a5cb7e-5c29-4b34-9260-436b933dc431" containerID="6b826cf4b10deb459d0f861eec1f359220f15430f6daae85fcd2dacae45fa23b" exitCode=0 Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.651932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a927-account-create-update-dmw95" event={"ID":"a7a5cb7e-5c29-4b34-9260-436b933dc431","Type":"ContainerDied","Data":"6b826cf4b10deb459d0f861eec1f359220f15430f6daae85fcd2dacae45fa23b"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.658805 4780 generic.go:334] "Generic (PLEG): container finished" podID="29cd460d-e210-4a2d-9199-ecf32fbd3fb6" containerID="f30601955e94c94d570bb7c9b2b21396a74650848a8837a8160a47406b522208" exitCode=0 Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.658959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qqtqb" event={"ID":"29cd460d-e210-4a2d-9199-ecf32fbd3fb6","Type":"ContainerDied","Data":"f30601955e94c94d570bb7c9b2b21396a74650848a8837a8160a47406b522208"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.663316 4780 generic.go:334] "Generic (PLEG): container finished" podID="d6a6b6eb-0713-42a6-ad08-582ea6d835cb" containerID="6e7ced7781b6a5e33d2240103aeec9a5ca2b777888c9cdb8cf8f640a4e31d624" exitCode=0 Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.663440 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-796f-account-create-update-dc2hl" event={"ID":"d6a6b6eb-0713-42a6-ad08-582ea6d835cb","Type":"ContainerDied","Data":"6e7ced7781b6a5e33d2240103aeec9a5ca2b777888c9cdb8cf8f640a4e31d624"} Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.665673 4780 generic.go:334] "Generic (PLEG): container finished" podID="54659268-947f-4e6d-8b41-3fc32e830ed3" containerID="484669aad9618352bb527faab8116cfefb67c707e8df7bbf51c11d558fc34090" exitCode=0 Dec 05 07:06:33 crc kubenswrapper[4780]: I1205 07:06:33.665718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vv8kw" event={"ID":"54659268-947f-4e6d-8b41-3fc32e830ed3","Type":"ContainerDied","Data":"484669aad9618352bb527faab8116cfefb67c707e8df7bbf51c11d558fc34090"} Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.060133 4780 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.203410 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9pn\" (UniqueName: \"kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn\") pod \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.203991 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts\") pod \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\" (UID: \"11b3d441-7101-46f0-8bcc-5ae9352dfa6c\") " Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.204996 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11b3d441-7101-46f0-8bcc-5ae9352dfa6c" (UID: "11b3d441-7101-46f0-8bcc-5ae9352dfa6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.211305 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn" (OuterVolumeSpecName: "kube-api-access-8s9pn") pod "11b3d441-7101-46f0-8bcc-5ae9352dfa6c" (UID: "11b3d441-7101-46f0-8bcc-5ae9352dfa6c"). InnerVolumeSpecName "kube-api-access-8s9pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.306775 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.306829 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s9pn\" (UniqueName: \"kubernetes.io/projected/11b3d441-7101-46f0-8bcc-5ae9352dfa6c-kube-api-access-8s9pn\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.678408 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1be5-account-create-update-bj4nb" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.678412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1be5-account-create-update-bj4nb" event={"ID":"11b3d441-7101-46f0-8bcc-5ae9352dfa6c","Type":"ContainerDied","Data":"848d38687408f6bc330272304fc75ed5d64b7fcc0f9e6e997c52dfcaaf77b168"} Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.678453 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848d38687408f6bc330272304fc75ed5d64b7fcc0f9e6e997c52dfcaaf77b168" Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.694437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"735e0179ac8e0b6856304c581093683f8810b9d6725ea83df77678572a5f9297"} Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.695063 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"f81d963680169349f2f9fb3728fe7259c5c1a6a053a4e334e212812bad3a43e5"} Dec 05 07:06:34 crc kubenswrapper[4780]: I1205 07:06:34.695461 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"977b8988c9e4ea255e060358102e1022eac55a01e723c56a5c68e57ee2a94e80"} Dec 05 07:06:35 crc kubenswrapper[4780]: I1205 07:06:35.714099 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"6914c82c6a21ad55bc14f021428bafdc5f53bb59cfc97dcdf06a93af43f76ed4"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.314213 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.317293 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.327826 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.337769 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts\") pod \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.337925 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts\") pod \"54659268-947f-4e6d-8b41-3fc32e830ed3\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.337965 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cb8n\" (UniqueName: \"kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n\") pod \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.337996 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khz9m\" (UniqueName: \"kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m\") pod \"54659268-947f-4e6d-8b41-3fc32e830ed3\" (UID: \"54659268-947f-4e6d-8b41-3fc32e830ed3\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.338093 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf24q\" (UniqueName: \"kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q\") pod \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\" (UID: \"e4c96ddb-0a87-4ae3-8676-03bc8afaf100\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.338130 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts\") pod \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\" (UID: \"d6a6b6eb-0713-42a6-ad08-582ea6d835cb\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.338768 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6a6b6eb-0713-42a6-ad08-582ea6d835cb" (UID: "d6a6b6eb-0713-42a6-ad08-582ea6d835cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.338774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54659268-947f-4e6d-8b41-3fc32e830ed3" (UID: "54659268-947f-4e6d-8b41-3fc32e830ed3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.338787 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4c96ddb-0a87-4ae3-8676-03bc8afaf100" (UID: "e4c96ddb-0a87-4ae3-8676-03bc8afaf100"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.349776 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m" (OuterVolumeSpecName: "kube-api-access-khz9m") pod "54659268-947f-4e6d-8b41-3fc32e830ed3" (UID: "54659268-947f-4e6d-8b41-3fc32e830ed3"). InnerVolumeSpecName "kube-api-access-khz9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.365637 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n" (OuterVolumeSpecName: "kube-api-access-7cb8n") pod "d6a6b6eb-0713-42a6-ad08-582ea6d835cb" (UID: "d6a6b6eb-0713-42a6-ad08-582ea6d835cb"). InnerVolumeSpecName "kube-api-access-7cb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.374901 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q" (OuterVolumeSpecName: "kube-api-access-kf24q") pod "e4c96ddb-0a87-4ae3-8676-03bc8afaf100" (UID: "e4c96ddb-0a87-4ae3-8676-03bc8afaf100"). InnerVolumeSpecName "kube-api-access-kf24q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.434770 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.439203 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5b62\" (UniqueName: \"kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62\") pod \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.439601 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts\") pod \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\" (UID: \"29cd460d-e210-4a2d-9199-ecf32fbd3fb6\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.439957 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440058 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf24q\" (UniqueName: \"kubernetes.io/projected/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-kube-api-access-kf24q\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440085 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440099 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c96ddb-0a87-4ae3-8676-03bc8afaf100-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440111 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54659268-947f-4e6d-8b41-3fc32e830ed3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440124 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cb8n\" (UniqueName: \"kubernetes.io/projected/d6a6b6eb-0713-42a6-ad08-582ea6d835cb-kube-api-access-7cb8n\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440164 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khz9m\" (UniqueName: \"kubernetes.io/projected/54659268-947f-4e6d-8b41-3fc32e830ed3-kube-api-access-khz9m\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.440179 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29cd460d-e210-4a2d-9199-ecf32fbd3fb6" (UID: "29cd460d-e210-4a2d-9199-ecf32fbd3fb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.442582 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62" (OuterVolumeSpecName: "kube-api-access-g5b62") pod "29cd460d-e210-4a2d-9199-ecf32fbd3fb6" (UID: "29cd460d-e210-4a2d-9199-ecf32fbd3fb6"). InnerVolumeSpecName "kube-api-access-g5b62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.541016 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rml7\" (UniqueName: \"kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7\") pod \"a7a5cb7e-5c29-4b34-9260-436b933dc431\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.541071 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts\") pod \"a7a5cb7e-5c29-4b34-9260-436b933dc431\" (UID: \"a7a5cb7e-5c29-4b34-9260-436b933dc431\") " Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.541284 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5b62\" (UniqueName: \"kubernetes.io/projected/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-kube-api-access-g5b62\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.541298 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29cd460d-e210-4a2d-9199-ecf32fbd3fb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.541660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7a5cb7e-5c29-4b34-9260-436b933dc431" (UID: "a7a5cb7e-5c29-4b34-9260-436b933dc431"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.543867 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7" (OuterVolumeSpecName: "kube-api-access-8rml7") pod "a7a5cb7e-5c29-4b34-9260-436b933dc431" (UID: "a7a5cb7e-5c29-4b34-9260-436b933dc431"). InnerVolumeSpecName "kube-api-access-8rml7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.643342 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rml7\" (UniqueName: \"kubernetes.io/projected/a7a5cb7e-5c29-4b34-9260-436b933dc431-kube-api-access-8rml7\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.643389 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7a5cb7e-5c29-4b34-9260-436b933dc431-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.748798 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a927-account-create-update-dmw95" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.749148 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a927-account-create-update-dmw95" event={"ID":"a7a5cb7e-5c29-4b34-9260-436b933dc431","Type":"ContainerDied","Data":"cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.749188 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb81fa364e817d8fa569b76a24b63377e639556dbb19d3dd91d48c85375fde47" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.750778 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qqtqb" event={"ID":"29cd460d-e210-4a2d-9199-ecf32fbd3fb6","Type":"ContainerDied","Data":"83730f5eb0b60e51d8e61a192f862609e541d39cebe09c73a2ca53d506721475"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.750788 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qqtqb" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.750818 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83730f5eb0b60e51d8e61a192f862609e541d39cebe09c73a2ca53d506721475" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.752384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-796f-account-create-update-dc2hl" event={"ID":"d6a6b6eb-0713-42a6-ad08-582ea6d835cb","Type":"ContainerDied","Data":"641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.752431 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="641491c26b58463ee23d3df220e2c7ccedbe6e6ec7191742b41e5d668cb61223" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.752397 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-796f-account-create-update-dc2hl" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.753700 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vv8kw" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.753710 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vv8kw" event={"ID":"54659268-947f-4e6d-8b41-3fc32e830ed3","Type":"ContainerDied","Data":"d602dc235f94b7095877e443356745943f9fae6e5670bfa9c47ed1c147d967b5"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.753734 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d602dc235f94b7095877e443356745943f9fae6e5670bfa9c47ed1c147d967b5" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.755209 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wdgqc" event={"ID":"e4c96ddb-0a87-4ae3-8676-03bc8afaf100","Type":"ContainerDied","Data":"412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b"} Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.755229 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wdgqc" Dec 05 07:06:38 crc kubenswrapper[4780]: I1205 07:06:38.755235 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412a42c99fff37ae4cd88487131c8758a28b7217a1e8aad49a0c66f1aa00f11b" Dec 05 07:06:39 crc kubenswrapper[4780]: I1205 07:06:39.767722 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerStarted","Data":"21310f93372293bb789a92ae777f6b29d31624531841b6a89f2146486609c159"} Dec 05 07:06:39 crc kubenswrapper[4780]: I1205 07:06:39.770658 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjpc8" event={"ID":"503f38d6-82f5-473e-9c59-2c32d8b8f855","Type":"ContainerStarted","Data":"86e03a93a4034922f11da685b33b905b9b9adf31707d92c9b1446d56127e1bb0"} Dec 05 07:06:39 crc kubenswrapper[4780]: I1205 07:06:39.808929 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=49.176291991 podStartE2EDuration="1m3.808899847s" podCreationTimestamp="2025-12-05 07:05:36 +0000 UTC" firstStartedPulling="2025-12-05 07:06:17.925012111 +0000 UTC m=+1211.994528443" lastFinishedPulling="2025-12-05 07:06:32.557619967 +0000 UTC m=+1226.627136299" observedRunningTime="2025-12-05 07:06:39.799443381 +0000 UTC m=+1233.868959713" watchObservedRunningTime="2025-12-05 07:06:39.808899847 +0000 UTC m=+1233.878416189" Dec 05 07:06:39 crc kubenswrapper[4780]: I1205 07:06:39.813596 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jjpc8" podStartSLOduration=2.153271118 podStartE2EDuration="8.813578612s" podCreationTimestamp="2025-12-05 07:06:31 +0000 UTC" firstStartedPulling="2025-12-05 07:06:32.422474533 +0000 UTC m=+1226.491990865" lastFinishedPulling="2025-12-05 07:06:39.082782027 +0000 UTC m=+1233.152298359" observedRunningTime="2025-12-05 07:06:39.81271904 +0000 UTC m=+1233.882235392" watchObservedRunningTime="2025-12-05 07:06:39.813578612 +0000 UTC m=+1233.883094944" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.085548 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.085874 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54659268-947f-4e6d-8b41-3fc32e830ed3" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.085900 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="54659268-947f-4e6d-8b41-3fc32e830ed3" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.085923 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c96ddb-0a87-4ae3-8676-03bc8afaf100" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.085929 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c96ddb-0a87-4ae3-8676-03bc8afaf100" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.085941 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a6b6eb-0713-42a6-ad08-582ea6d835cb" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.085948 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a6b6eb-0713-42a6-ad08-582ea6d835cb" containerName="mariadb-account-create-update" Dec 05 
07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.085961 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cd460d-e210-4a2d-9199-ecf32fbd3fb6" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.085967 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cd460d-e210-4a2d-9199-ecf32fbd3fb6" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.086003 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b3d441-7101-46f0-8bcc-5ae9352dfa6c" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086011 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b3d441-7101-46f0-8bcc-5ae9352dfa6c" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: E1205 07:06:40.086021 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a5cb7e-5c29-4b34-9260-436b933dc431" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086026 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a5cb7e-5c29-4b34-9260-436b933dc431" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086177 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c96ddb-0a87-4ae3-8676-03bc8afaf100" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086206 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a5cb7e-5c29-4b34-9260-436b933dc431" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086213 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cd460d-e210-4a2d-9199-ecf32fbd3fb6" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086224 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a6b6eb-0713-42a6-ad08-582ea6d835cb" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086251 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="54659268-947f-4e6d-8b41-3fc32e830ed3" containerName="mariadb-database-create" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.086263 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b3d441-7101-46f0-8bcc-5ae9352dfa6c" containerName="mariadb-account-create-update" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.087117 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.089088 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.096596 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.164836 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.164947 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.165002 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql697\" (UniqueName: \"kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.165061 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.165083 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.165625 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.266833 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.266978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: 
\"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267149 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql697\" (UniqueName: \"kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267201 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267224 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267893 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267974 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.267974 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.268246 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.268260 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 
crc kubenswrapper[4780]: I1205 07:06:40.290343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql697\" (UniqueName: \"kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697\") pod \"dnsmasq-dns-8567775dfc-bd5zh\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:40 crc kubenswrapper[4780]: I1205 07:06:40.407454 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:40.867703 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:41 crc kubenswrapper[4780]: W1205 07:06:40.876436 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod939a37e8_bd9d_4684_8596_6b6907ad309b.slice/crio-9e71de687d7bccaa694344970c8010c20b8ad8b643fd536b742b2da0fcb69b74 WatchSource:0}: Error finding container 9e71de687d7bccaa694344970c8010c20b8ad8b643fd536b742b2da0fcb69b74: Status 404 returned error can't find the container with id 9e71de687d7bccaa694344970c8010c20b8ad8b643fd536b742b2da0fcb69b74 Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:41.821311 4780 generic.go:334] "Generic (PLEG): container finished" podID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerID="6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7" exitCode=0 Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:41.821419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" event={"ID":"939a37e8-bd9d-4684-8596-6b6907ad309b","Type":"ContainerDied","Data":"6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7"} Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:41.821738 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" event={"ID":"939a37e8-bd9d-4684-8596-6b6907ad309b","Type":"ContainerStarted","Data":"9e71de687d7bccaa694344970c8010c20b8ad8b643fd536b742b2da0fcb69b74"} Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:41.824142 4780 generic.go:334] "Generic (PLEG): container finished" podID="c79e9679-696f-498c-a1c0-d2d465c637fd" containerID="feabab19310c27909296902614d550ac38609227b665b0437e180098bac35617" exitCode=0 Dec 05 07:06:41 crc kubenswrapper[4780]: I1205 07:06:41.824180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bsjtr" event={"ID":"c79e9679-696f-498c-a1c0-d2d465c637fd","Type":"ContainerDied","Data":"feabab19310c27909296902614d550ac38609227b665b0437e180098bac35617"} Dec 05 07:06:42 crc kubenswrapper[4780]: I1205 07:06:42.833757 4780 generic.go:334] "Generic (PLEG): container finished" podID="503f38d6-82f5-473e-9c59-2c32d8b8f855" containerID="86e03a93a4034922f11da685b33b905b9b9adf31707d92c9b1446d56127e1bb0" exitCode=0 Dec 05 07:06:42 crc kubenswrapper[4780]: I1205 07:06:42.833850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjpc8" event={"ID":"503f38d6-82f5-473e-9c59-2c32d8b8f855","Type":"ContainerDied","Data":"86e03a93a4034922f11da685b33b905b9b9adf31707d92c9b1446d56127e1bb0"} Dec 05 07:06:42 crc kubenswrapper[4780]: I1205 07:06:42.836456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" 
event={"ID":"939a37e8-bd9d-4684-8596-6b6907ad309b","Type":"ContainerStarted","Data":"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54"} Dec 05 07:06:42 crc kubenswrapper[4780]: I1205 07:06:42.873042 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" podStartSLOduration=2.873021985 podStartE2EDuration="2.873021985s" podCreationTimestamp="2025-12-05 07:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:42.864919397 +0000 UTC m=+1236.934435729" watchObservedRunningTime="2025-12-05 07:06:42.873021985 +0000 UTC m=+1236.942538317" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.199155 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bsjtr" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.316858 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rnb2\" (UniqueName: \"kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2\") pod \"c79e9679-696f-498c-a1c0-d2d465c637fd\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.316950 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data\") pod \"c79e9679-696f-498c-a1c0-d2d465c637fd\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.317055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data\") pod \"c79e9679-696f-498c-a1c0-d2d465c637fd\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.317123 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle\") pod \"c79e9679-696f-498c-a1c0-d2d465c637fd\" (UID: \"c79e9679-696f-498c-a1c0-d2d465c637fd\") " Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.323091 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2" (OuterVolumeSpecName: "kube-api-access-8rnb2") pod "c79e9679-696f-498c-a1c0-d2d465c637fd" (UID: "c79e9679-696f-498c-a1c0-d2d465c637fd"). InnerVolumeSpecName "kube-api-access-8rnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.323742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c79e9679-696f-498c-a1c0-d2d465c637fd" (UID: "c79e9679-696f-498c-a1c0-d2d465c637fd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.351280 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c79e9679-696f-498c-a1c0-d2d465c637fd" (UID: "c79e9679-696f-498c-a1c0-d2d465c637fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.364477 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data" (OuterVolumeSpecName: "config-data") pod "c79e9679-696f-498c-a1c0-d2d465c637fd" (UID: "c79e9679-696f-498c-a1c0-d2d465c637fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.420047 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.420084 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.420094 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rnb2\" (UniqueName: \"kubernetes.io/projected/c79e9679-696f-498c-a1c0-d2d465c637fd-kube-api-access-8rnb2\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.420105 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79e9679-696f-498c-a1c0-d2d465c637fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.852693 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bsjtr" event={"ID":"c79e9679-696f-498c-a1c0-d2d465c637fd","Type":"ContainerDied","Data":"0aca8841f37cf8659034d2af428b74daa745625212d5b4e31ef47c67fb08ea29"} Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.852756 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aca8841f37cf8659034d2af428b74daa745625212d5b4e31ef47c67fb08ea29" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.852705 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bsjtr" Dec 05 07:06:43 crc kubenswrapper[4780]: I1205 07:06:43.853711 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.071278 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.236067 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data\") pod \"503f38d6-82f5-473e-9c59-2c32d8b8f855\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.236192 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l9vf\" (UniqueName: \"kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf\") pod \"503f38d6-82f5-473e-9c59-2c32d8b8f855\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.236302 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle\") pod \"503f38d6-82f5-473e-9c59-2c32d8b8f855\" (UID: \"503f38d6-82f5-473e-9c59-2c32d8b8f855\") " Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.245096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf" (OuterVolumeSpecName: "kube-api-access-7l9vf") pod "503f38d6-82f5-473e-9c59-2c32d8b8f855" (UID: "503f38d6-82f5-473e-9c59-2c32d8b8f855"). InnerVolumeSpecName "kube-api-access-7l9vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.286130 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.321210 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:44 crc kubenswrapper[4780]: E1205 07:06:44.321594 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503f38d6-82f5-473e-9c59-2c32d8b8f855" containerName="keystone-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.321609 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="503f38d6-82f5-473e-9c59-2c32d8b8f855" containerName="keystone-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: E1205 07:06:44.321626 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" containerName="glance-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.321634 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" containerName="glance-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.321803 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" containerName="glance-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.321818 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="503f38d6-82f5-473e-9c59-2c32d8b8f855" containerName="keystone-db-sync" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.322777 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.327395 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "503f38d6-82f5-473e-9c59-2c32d8b8f855" (UID: "503f38d6-82f5-473e-9c59-2c32d8b8f855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.335721 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.342687 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l9vf\" (UniqueName: \"kubernetes.io/projected/503f38d6-82f5-473e-9c59-2c32d8b8f855-kube-api-access-7l9vf\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.342866 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.369025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data" (OuterVolumeSpecName: "config-data") pod "503f38d6-82f5-473e-9c59-2c32d8b8f855" (UID: "503f38d6-82f5-473e-9c59-2c32d8b8f855"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.443735 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.443792 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.444024 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.444062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf767\" (UniqueName: \"kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.444144 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.444200 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.444274 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503f38d6-82f5-473e-9c59-2c32d8b8f855-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546009 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546107 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546127 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf767\" (UniqueName: \"kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546161 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546472 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.546958 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.547050 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " 
pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.547422 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.547482 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.547685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.548082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.562308 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf767\" (UniqueName: \"kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767\") pod \"dnsmasq-dns-7fbdccd69c-79pkm\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.741459 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.887707 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jjpc8" Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.888642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jjpc8" event={"ID":"503f38d6-82f5-473e-9c59-2c32d8b8f855","Type":"ContainerDied","Data":"90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7"} Dec 05 07:06:44 crc kubenswrapper[4780]: I1205 07:06:44.888681 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cb00ddb71a2079dd5863b55c17ac36cc68de3b90b56a2f400bfbd359b956b7" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.040047 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.055774 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jn4xw"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.057869 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.060426 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8svcl" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.060991 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.061290 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.061537 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.063296 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.091407 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jn4xw"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.108914 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.110492 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.154618 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158803 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158831 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcjp\" (UniqueName: \"kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158940 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 
07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.158960 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.262760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263077 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48sk9\" (UniqueName: \"kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263227 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263301 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263418 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263550 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcjp\" (UniqueName: \"kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc 
kubenswrapper[4780]: I1205 07:06:45.263651 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263729 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263824 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.263962 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.279798 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.280338 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.281512 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.293422 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.294067 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.294232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc 
kubenswrapper[4780]: I1205 07:06:45.310798 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcjp\" (UniqueName: \"kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp\") pod \"keystone-bootstrap-jn4xw\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.347524 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.350139 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.355285 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.355483 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.377677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.377775 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.377815 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.378013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48sk9\" (UniqueName: \"kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.378078 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.378105 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.378764 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.382806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.384739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.386489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.387061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.405324 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.411353 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.427975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48sk9\" (UniqueName: \"kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9\") pod \"dnsmasq-dns-64ccc486bf-rknmk\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.440965 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gqpwk"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.442155 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.449120 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.449343 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d6h5f" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.449402 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.449443 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.450384 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479745 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479793 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479857 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldj6\" (UniqueName: \"kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479893 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.479979 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.480161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc 
kubenswrapper[4780]: I1205 07:06:45.500574 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5f9r8"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.501852 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.510806 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-g2dxj" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.511469 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.531995 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mqbgb"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.536792 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.541652 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.542482 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.542828 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mtf4p" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.611150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldj6\" (UniqueName: \"kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.611217 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.611272 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.614568 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.614784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.617993 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mqbgb"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.620535 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.620789 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.620835 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmhvj\" (UniqueName: \"kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.620894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.620994 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.621047 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.621092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.638753 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5f9r8"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.652986 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.653346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldj6\" (UniqueName: \"kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.663663 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.667261 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.685896 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data\") pod \"ceilometer-0\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.685989 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.691611 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.702794 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gqpwk"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.718162 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.727687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5h4\" (UniqueName: \"kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.736892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737301 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737373 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737416 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8d2d\" (UniqueName: \"kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737473 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737544 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmhvj\" (UniqueName: \"kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.737645 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.743124 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.748356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.755628 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7kv9n"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.760259 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.763688 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.763775 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tcqb6" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.763962 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.764643 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7kv9n"] Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.780043 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmhvj\" (UniqueName: \"kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj\") pod \"neutron-db-sync-gqpwk\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") " pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.788091 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.804234 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839212 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839298 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839333 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5h4\" (UniqueName: \"kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839500 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6kn\" (UniqueName: \"kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839600 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8d2d\" (UniqueName: \"kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.839688 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.847849 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.847904 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.851261 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.851615 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.862995 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.863136 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.866779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8d2d\" (UniqueName: \"kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d\") pod \"barbican-db-sync-5f9r8\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") " pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.875758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5h4\" (UniqueName: \"kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4\") pod \"placement-db-sync-mqbgb\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") " pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.904845 4780 generic.go:334] "Generic (PLEG): container finished" podID="82143c4a-0d37-4c1a-b9a1-e00ef39a7229" containerID="cb56407aa27ef532cc75e01362c476b8388af129b49fd984cf0766923ecf7c1e" exitCode=0 Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.904947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" event={"ID":"82143c4a-0d37-4c1a-b9a1-e00ef39a7229","Type":"ContainerDied","Data":"cb56407aa27ef532cc75e01362c476b8388af129b49fd984cf0766923ecf7c1e"} Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.904995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" 
event={"ID":"82143c4a-0d37-4c1a-b9a1-e00ef39a7229","Type":"ContainerStarted","Data":"86268dd8d15d2b33e815f52a628477f9c755af4347e28aa257f47320e63d4b57"} Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.905322 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="dnsmasq-dns" containerID="cri-o://81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54" gracePeriod=10 Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942038 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942349 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942434 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942514 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") 
" pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942758 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl6kn\" (UniqueName: \"kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.942864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.943024 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742f2\" (UniqueName: \"kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.945297 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.945799 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.945895 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.946749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc kubenswrapper[4780]: I1205 07:06:45.947069 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:45 crc 
kubenswrapper[4780]: I1205 07:06:45.977269 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6kn\" (UniqueName: \"kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn\") pod \"dnsmasq-dns-79464d554c-c6b8s\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.016462 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-742f2\" (UniqueName: \"kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049680 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049715 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049789 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049837 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.049957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.054113 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.056874 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.059248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.059957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.061064 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jn4xw"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.061306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: W1205 07:06:46.069388 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf552588c_3a90_4a37_b289_602a88b75dee.slice/crio-12f4807d4e54d7374ecfd7faf4d99c7d6134ac761f6befe47d849025db7ef6b3 WatchSource:0}: Error finding container 12f4807d4e54d7374ecfd7faf4d99c7d6134ac761f6befe47d849025db7ef6b3: Status 404 returned error can't find the container with id 12f4807d4e54d7374ecfd7faf4d99c7d6134ac761f6befe47d849025db7ef6b3 Dec 05 07:06:46 crc kubenswrapper[4780]: W1205 07:06:46.073434 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb0f320_1194_432e_80eb_c433df8a5257.slice/crio-1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b WatchSource:0}: Error finding container 1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b: Status 404 returned error can't find the container with id 1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.073472 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-742f2\" (UniqueName: \"kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2\") pod \"cinder-db-sync-7kv9n\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.132524 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f9r8" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.142349 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.159995 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mqbgb" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.246558 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.256291 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.260670 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gkns2" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.260929 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.261043 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.298085 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.356764 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.391594 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.393074 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.396177 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.420952 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462173 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462278 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mrh\" (UniqueName: \"kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 
07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462361 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.462407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.566838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.566962 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.566999 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mrh\" (UniqueName: \"kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567053 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844n6\" (UniqueName: \"kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" 
Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567146 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567169 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567242 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567277 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567300 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567333 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.567367 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.569276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.569688 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.571385 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.588472 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.595581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.610595 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.621675 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mrh\" (UniqueName: \"kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.662307 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gqpwk"] Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672309 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672388 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672485 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-844n6\" (UniqueName: \"kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672539 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.672578 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.673100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.673923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.674368 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.680072 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.771057 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.771736 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.782691 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-844n6\" (UniqueName: \"kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.790557 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.825828 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.834755 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.838457 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:06:46 crc kubenswrapper[4780]: W1205 07:06:46.870269 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48088846_ad19_4031_b988_4825d14f503f.slice/crio-7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301 WatchSource:0}: Error finding container 7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301: Status 404 returned error can't find the container with id 7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301 Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.870435 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.887656 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf767\" (UniqueName: \"kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.887700 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.887744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.887868 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.889131 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.889193 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc\") pod \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\" (UID: \"82143c4a-0d37-4c1a-b9a1-e00ef39a7229\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.909628 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.910176 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767" (OuterVolumeSpecName: "kube-api-access-zf767") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "kube-api-access-zf767". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.916692 4780 generic.go:334] "Generic (PLEG): container finished" podID="f552588c-3a90-4a37-b289-602a88b75dee" containerID="f9755e6a79217d9894dc7058874a9ab10b434bad3d0d65380b863819e356ded8" exitCode=0 Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.916891 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" event={"ID":"f552588c-3a90-4a37-b289-602a88b75dee","Type":"ContainerDied","Data":"f9755e6a79217d9894dc7058874a9ab10b434bad3d0d65380b863819e356ded8"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.916968 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" event={"ID":"f552588c-3a90-4a37-b289-602a88b75dee","Type":"ContainerStarted","Data":"12f4807d4e54d7374ecfd7faf4d99c7d6134ac761f6befe47d849025db7ef6b3"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.922467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" event={"ID":"82143c4a-0d37-4c1a-b9a1-e00ef39a7229","Type":"ContainerDied","Data":"86268dd8d15d2b33e815f52a628477f9c755af4347e28aa257f47320e63d4b57"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.922528 4780 scope.go:117] "RemoveContainer" containerID="cb56407aa27ef532cc75e01362c476b8388af129b49fd984cf0766923ecf7c1e" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.922713 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fbdccd69c-79pkm" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.931421 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerStarted","Data":"1b9077dfaeb311f6a15a565961ca83e218259a9c45b418a2bb9443da9fc1a323"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.942077 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" event={"ID":"48088846-ad19-4031-b988-4825d14f503f","Type":"ContainerStarted","Data":"7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.942472 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config" (OuterVolumeSpecName: "config") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.952427 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jn4xw" event={"ID":"6fb0f320-1194-432e-80eb-c433df8a5257","Type":"ContainerStarted","Data":"748cfaf03a9adcb58c631cc8c43a8c70f9648ada77462ecef8c4354ad0cb4038"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.952487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jn4xw" event={"ID":"6fb0f320-1194-432e-80eb-c433df8a5257","Type":"ContainerStarted","Data":"1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.956903 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.956966 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.959860 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqpwk" event={"ID":"d8ecabbe-038f-4714-b9a1-5f2efef47afd","Type":"ContainerStarted","Data":"8032c2681e5d0b3b5078b4138f13134697a37751c650aad9edd41872d5616aae"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.975114 4780 generic.go:334] "Generic (PLEG): container finished" podID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerID="81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54" exitCode=0 Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.975163 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" event={"ID":"939a37e8-bd9d-4684-8596-6b6907ad309b","Type":"ContainerDied","Data":"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.975188 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" event={"ID":"939a37e8-bd9d-4684-8596-6b6907ad309b","Type":"ContainerDied","Data":"9e71de687d7bccaa694344970c8010c20b8ad8b643fd536b742b2da0fcb69b74"} Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.975249 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8567775dfc-bd5zh" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.978374 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.980455 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jn4xw" podStartSLOduration=1.9804324960000002 podStartE2EDuration="1.980432496s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:46.974346173 +0000 UTC m=+1241.043862505" watchObservedRunningTime="2025-12-05 07:06:46.980432496 +0000 UTC m=+1241.049948828" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.986676 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82143c4a-0d37-4c1a-b9a1-e00ef39a7229" (UID: "82143c4a-0d37-4c1a-b9a1-e00ef39a7229"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.990490 4780 scope.go:117] "RemoveContainer" containerID="81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991441 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql697\" (UniqueName: \"kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991493 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991580 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991618 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991702 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.991780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0\") pod \"939a37e8-bd9d-4684-8596-6b6907ad309b\" (UID: \"939a37e8-bd9d-4684-8596-6b6907ad309b\") " Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992124 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992141 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf767\" (UniqueName: \"kubernetes.io/projected/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-kube-api-access-zf767\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992151 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992160 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992175 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:46 crc kubenswrapper[4780]: I1205 07:06:46.992186 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82143c4a-0d37-4c1a-b9a1-e00ef39a7229-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.011835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697" (OuterVolumeSpecName: "kube-api-access-ql697") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "kube-api-access-ql697". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.033351 4780 scope.go:117] "RemoveContainer" containerID="6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.081480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config" (OuterVolumeSpecName: "config") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.097479 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql697\" (UniqueName: \"kubernetes.io/projected/939a37e8-bd9d-4684-8596-6b6907ad309b-kube-api-access-ql697\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.097527 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.104560 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.106271 4780 scope.go:117] "RemoveContainer" containerID="81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54" Dec 05 07:06:47 crc kubenswrapper[4780]: E1205 07:06:47.110140 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54\": container with ID starting with 81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54 not found: ID does not exist" containerID="81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.110185 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54"} err="failed to get container status \"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54\": rpc error: code = NotFound desc = could not find container \"81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54\": container with ID starting with 81f49b3c0f882b9a6251b02b629011f97a3528e53427cb25e420771339a3ef54 not found: ID does not exist" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.110211 4780 scope.go:117] "RemoveContainer" containerID="6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7" Dec 05 07:06:47 crc kubenswrapper[4780]: E1205 07:06:47.117573 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7\": container with ID starting with 6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7 not found: ID does not exist" containerID="6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.117609 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7"} err="failed to get container status \"6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7\": rpc error: code = NotFound desc = could not find container \"6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7\": container with ID starting with 6aa7244df46fe0f76390df4494ac74745debe0922e385b6e9ed88376d42cb2c7 not found: ID does not exist" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.118614 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.156650 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.162531 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.167575 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "939a37e8-bd9d-4684-8596-6b6907ad309b" (UID: "939a37e8-bd9d-4684-8596-6b6907ad309b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.178948 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5f9r8"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.190652 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7kv9n"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.198965 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.198991 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.199003 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.199012 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/939a37e8-bd9d-4684-8596-6b6907ad309b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.201601 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mqbgb"] Dec 05 07:06:47 crc kubenswrapper[4780]: W1205 07:06:47.253851 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc67223f0_4471_424c_b74d_886cec703c8a.slice/crio-0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5 WatchSource:0}: Error finding container 0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5: Status 404 returned error can't find the container with id 0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5 Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.473170 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.521123 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.545133 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fbdccd69c-79pkm"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.558988 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.567020 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8567775dfc-bd5zh"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.607871 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.608370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.608414 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.608475 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.608509 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.608586 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48sk9\" (UniqueName: \"kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9\") pod \"f552588c-3a90-4a37-b289-602a88b75dee\" (UID: \"f552588c-3a90-4a37-b289-602a88b75dee\") " Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.609242 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.619931 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9" (OuterVolumeSpecName: "kube-api-access-48sk9") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "kube-api-access-48sk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.632790 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.644390 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config" (OuterVolumeSpecName: "config") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.644579 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.648871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.660218 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f552588c-3a90-4a37-b289-602a88b75dee" (UID: "f552588c-3a90-4a37-b289-602a88b75dee"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.717761 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.718635 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.718819 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.719256 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48sk9\" (UniqueName: \"kubernetes.io/projected/f552588c-3a90-4a37-b289-602a88b75dee-kube-api-access-48sk9\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.719354 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.719903 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f552588c-3a90-4a37-b289-602a88b75dee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:47 crc kubenswrapper[4780]: I1205 07:06:47.973728 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:06:47 crc kubenswrapper[4780]: W1205 07:06:47.984845 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfdf34f2_3849_458f_9039_ee28fe0b998a.slice/crio-85b465f5398988ce51d136d5640f5c08fc6d89d9c75cbd473a26f946af6cfedb WatchSource:0}: Error finding container 85b465f5398988ce51d136d5640f5c08fc6d89d9c75cbd473a26f946af6cfedb: Status 404 returned error can't find the container with id 85b465f5398988ce51d136d5640f5c08fc6d89d9c75cbd473a26f946af6cfedb Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.029325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" event={"ID":"f552588c-3a90-4a37-b289-602a88b75dee","Type":"ContainerDied","Data":"12f4807d4e54d7374ecfd7faf4d99c7d6134ac761f6befe47d849025db7ef6b3"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.029575 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64ccc486bf-rknmk" Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.029613 4780 scope.go:117] "RemoveContainer" containerID="f9755e6a79217d9894dc7058874a9ab10b434bad3d0d65380b863819e356ded8" Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.036331 4780 generic.go:334] "Generic (PLEG): container finished" podID="48088846-ad19-4031-b988-4825d14f503f" containerID="86cae831a23053daa2cbd1948458e97fdbd3340f44657f9ce62799dfac4d38b1" exitCode=0 Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.036413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" event={"ID":"48088846-ad19-4031-b988-4825d14f503f","Type":"ContainerDied","Data":"86cae831a23053daa2cbd1948458e97fdbd3340f44657f9ce62799dfac4d38b1"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.074487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerStarted","Data":"1f07f5ce929a213dfea6e8e8b6bbb022f7b9a401eaaf9d2599c5721e0e4193b5"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.078139 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f9r8" event={"ID":"a154e1e8-52d0-43c2-8685-cd8769db58d0","Type":"ContainerStarted","Data":"f06e09fc35c4d336404fccf3dea7b74b20124dce6865979db667632f3fb6db80"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.080443 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqpwk" event={"ID":"d8ecabbe-038f-4714-b9a1-5f2efef47afd","Type":"ContainerStarted","Data":"17c12e148aee39d277dd21f751e37c01d7142872900b50c7990ad1ae85ded518"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.092425 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kv9n" event={"ID":"c67223f0-4471-424c-b74d-886cec703c8a","Type":"ContainerStarted","Data":"0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.121490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mqbgb" event={"ID":"44428dc2-af95-4541-b700-7ac3b81164d5","Type":"ContainerStarted","Data":"a2d0ec2907a5dab1762ce2556bea52e4ebc24e4524d98deac46212e6bf4ae6f2"} Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.139460 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gqpwk" podStartSLOduration=3.139439878 podStartE2EDuration="3.139439878s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:48.11284776 +0000 UTC m=+1242.182364092" watchObservedRunningTime="2025-12-05 07:06:48.139439878 +0000 UTC m=+1242.208956210" Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.181175 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82143c4a-0d37-4c1a-b9a1-e00ef39a7229" path="/var/lib/kubelet/pods/82143c4a-0d37-4c1a-b9a1-e00ef39a7229/volumes" Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.198422 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" path="/var/lib/kubelet/pods/939a37e8-bd9d-4684-8596-6b6907ad309b/volumes" Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.201181 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:48 crc kubenswrapper[4780]: I1205 07:06:48.201318 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64ccc486bf-rknmk"] Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.167680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerStarted","Data":"d7c55e89701ff02da48f0e1e974d5df5bb85eda65db9027fb6647858fe5c5b66"} Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.181973 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerStarted","Data":"85b465f5398988ce51d136d5640f5c08fc6d89d9c75cbd473a26f946af6cfedb"} Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.198656 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" event={"ID":"48088846-ad19-4031-b988-4825d14f503f","Type":"ContainerStarted","Data":"db7f8ad925a6d723dd40ba017c82e5d901b4fc6f29ab6babf1ed5e72d4d8a760"} Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.198727 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.240343 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" podStartSLOduration=4.240328901 podStartE2EDuration="4.240328901s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:49.239137418 +0000 UTC m=+1243.308653770" watchObservedRunningTime="2025-12-05 07:06:49.240328901 +0000 UTC m=+1243.309845233" Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.301423 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.368695 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:06:49 crc kubenswrapper[4780]: I1205 07:06:49.439718 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.171895 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f552588c-3a90-4a37-b289-602a88b75dee" path="/var/lib/kubelet/pods/f552588c-3a90-4a37-b289-602a88b75dee/volumes" Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.226980 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerStarted","Data":"01cdd962fbdb541e613c40ab5bc3acbddcaf24bc6bebd743eb4c9108fa7d703e"} Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.232495 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerStarted","Data":"86402d57237cedbc83eebe7823581868f86eb873486ac8f19ab95fa5bcde30b8"} Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.234782 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-httpd" 
containerID="cri-o://86402d57237cedbc83eebe7823581868f86eb873486ac8f19ab95fa5bcde30b8" gracePeriod=30 Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.235419 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-log" containerID="cri-o://d7c55e89701ff02da48f0e1e974d5df5bb85eda65db9027fb6647858fe5c5b66" gracePeriod=30 Dec 05 07:06:50 crc kubenswrapper[4780]: I1205 07:06:50.260333 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.260312062 podStartE2EDuration="5.260312062s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:50.259119751 +0000 UTC m=+1244.328636083" watchObservedRunningTime="2025-12-05 07:06:50.260312062 +0000 UTC m=+1244.329828394" Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.257516 4780 generic.go:334] "Generic (PLEG): container finished" podID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerID="86402d57237cedbc83eebe7823581868f86eb873486ac8f19ab95fa5bcde30b8" exitCode=0 Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.258051 4780 generic.go:334] "Generic (PLEG): container finished" podID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerID="d7c55e89701ff02da48f0e1e974d5df5bb85eda65db9027fb6647858fe5c5b66" exitCode=143 Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.257608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerDied","Data":"86402d57237cedbc83eebe7823581868f86eb873486ac8f19ab95fa5bcde30b8"} Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.258235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerDied","Data":"d7c55e89701ff02da48f0e1e974d5df5bb85eda65db9027fb6647858fe5c5b66"} Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.263911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerStarted","Data":"2e233f628eef75ef2f9ea44caa91cab3e49e6d3447a9c45c48b5a6bd85af32e4"} Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.264101 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-log" containerID="cri-o://01cdd962fbdb541e613c40ab5bc3acbddcaf24bc6bebd743eb4c9108fa7d703e" gracePeriod=30 Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.264518 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-httpd" containerID="cri-o://2e233f628eef75ef2f9ea44caa91cab3e49e6d3447a9c45c48b5a6bd85af32e4" gracePeriod=30 Dec 05 07:06:51 crc kubenswrapper[4780]: I1205 07:06:51.328074 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.328050943 podStartE2EDuration="6.328050943s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:06:51.296201894 +0000 UTC m=+1245.365718236" watchObservedRunningTime="2025-12-05 07:06:51.328050943 +0000 UTC m=+1245.397567275" Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.277151 4780 generic.go:334] "Generic (PLEG): container finished" podID="6fb0f320-1194-432e-80eb-c433df8a5257" containerID="748cfaf03a9adcb58c631cc8c43a8c70f9648ada77462ecef8c4354ad0cb4038" exitCode=0 Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.277237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jn4xw" event={"ID":"6fb0f320-1194-432e-80eb-c433df8a5257","Type":"ContainerDied","Data":"748cfaf03a9adcb58c631cc8c43a8c70f9648ada77462ecef8c4354ad0cb4038"} Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.280936 4780 generic.go:334] "Generic (PLEG): container finished" podID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerID="2e233f628eef75ef2f9ea44caa91cab3e49e6d3447a9c45c48b5a6bd85af32e4" exitCode=0 Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.280973 4780 generic.go:334] "Generic (PLEG): container finished" podID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerID="01cdd962fbdb541e613c40ab5bc3acbddcaf24bc6bebd743eb4c9108fa7d703e" exitCode=143 Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.281033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerDied","Data":"2e233f628eef75ef2f9ea44caa91cab3e49e6d3447a9c45c48b5a6bd85af32e4"} Dec 05 07:06:52 crc kubenswrapper[4780]: I1205 07:06:52.281064 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerDied","Data":"01cdd962fbdb541e613c40ab5bc3acbddcaf24bc6bebd743eb4c9108fa7d703e"} Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.460351 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643460 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643540 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643613 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643830 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78mrh\" (UniqueName: \"kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.643908 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data\") pod \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\" (UID: \"1f2f67e3-1f38-4301-a83a-482a3cc49c0e\") " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.645855 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.651958 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs" (OuterVolumeSpecName: "logs") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.664128 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh" (OuterVolumeSpecName: "kube-api-access-78mrh") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "kube-api-access-78mrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.724020 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.724200 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts" (OuterVolumeSpecName: "scripts") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.727320 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data" (OuterVolumeSpecName: "config-data") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.728284 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2f67e3-1f38-4301-a83a-482a3cc49c0e" (UID: "1f2f67e3-1f38-4301-a83a-482a3cc49c0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748534 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748575 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78mrh\" (UniqueName: \"kubernetes.io/projected/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-kube-api-access-78mrh\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748591 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748624 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748636 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748648 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.748660 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f2f67e3-1f38-4301-a83a-482a3cc49c0e-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.774773 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 07:06:53 crc kubenswrapper[4780]: I1205 07:06:53.850501 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.306766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f2f67e3-1f38-4301-a83a-482a3cc49c0e","Type":"ContainerDied","Data":"1f07f5ce929a213dfea6e8e8b6bbb022f7b9a401eaaf9d2599c5721e0e4193b5"} Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.307119 4780 scope.go:117] "RemoveContainer" containerID="86402d57237cedbc83eebe7823581868f86eb873486ac8f19ab95fa5bcde30b8" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.306846 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.335951 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.349112 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.361950 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362386 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-log" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362409 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-log" Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362428 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f552588c-3a90-4a37-b289-602a88b75dee" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362448 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f552588c-3a90-4a37-b289-602a88b75dee" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362462 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="dnsmasq-dns" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362471 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="dnsmasq-dns" Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362480 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362487 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362501 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82143c4a-0d37-4c1a-b9a1-e00ef39a7229" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362509 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="82143c4a-0d37-4c1a-b9a1-e00ef39a7229" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: E1205 07:06:54.362525 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-httpd" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362532 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-httpd" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362700 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-log" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362716 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" containerName="glance-httpd" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362728 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="939a37e8-bd9d-4684-8596-6b6907ad309b" containerName="dnsmasq-dns" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362744 4780 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f552588c-3a90-4a37-b289-602a88b75dee" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.362753 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="82143c4a-0d37-4c1a-b9a1-e00ef39a7229" containerName="init" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.363729 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.365744 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.380841 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbsc4\" (UniqueName: \"kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465317 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465476 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465527 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.465637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: 
\"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.567310 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbsc4\" (UniqueName: \"kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.567403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.567426 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.567450 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.568095 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.568696 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.568760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.568809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.570263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 
07:06:54.571772 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.574946 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.575608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.589873 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbsc4\" (UniqueName: \"kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.592383 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.618677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.680187 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:06:54 crc kubenswrapper[4780]: I1205 07:06:54.708272 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.018050 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.085151 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.085382 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" containerID="cri-o://d4e673a5d0d5b95b4e087e873639a0e910f7ecaf75ab1e36768194122864027c" gracePeriod=10 Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.179183 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2f67e3-1f38-4301-a83a-482a3cc49c0e" path="/var/lib/kubelet/pods/1f2f67e3-1f38-4301-a83a-482a3cc49c0e/volumes" Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.342153 4780 generic.go:334] "Generic (PLEG): container finished" podID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerID="d4e673a5d0d5b95b4e087e873639a0e910f7ecaf75ab1e36768194122864027c" exitCode=0 Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.342218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" event={"ID":"54e3232c-b0fc-4759-b08c-551fbdfc4c5f","Type":"ContainerDied","Data":"d4e673a5d0d5b95b4e087e873639a0e910f7ecaf75ab1e36768194122864027c"} Dec 05 07:06:56 crc kubenswrapper[4780]: I1205 07:06:56.790352 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Dec 05 07:06:59 crc kubenswrapper[4780]: I1205 07:06:59.908196 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:06:59 crc kubenswrapper[4780]: I1205 07:06:59.909598 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:06:59 crc kubenswrapper[4780]: I1205 07:06:59.909716 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:06:59 crc kubenswrapper[4780]: I1205 07:06:59.910425 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:06:59 crc kubenswrapper[4780]: I1205 07:06:59.910538 4780 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b" gracePeriod=600 Dec 05 07:07:01 crc kubenswrapper[4780]: E1205 07:07:01.354473 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31" Dec 05 07:07:01 crc kubenswrapper[4780]: E1205 07:07:01.355039 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:43a24796dabde68270dbfefa107205e173fdd6a0dc701502858cadbede69da31,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67dhc5h56chddh656h5d9hddh8fh68dh67bh648h58bh5f6h645h57dhbch77h5c6h7fh9dh64bh567h644h665h598hbfh5dbh54chf8h585h564h56dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ldj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(995ff9d4-9da8-471e-a696-aefb4ebbf473): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:07:01 crc kubenswrapper[4780]: I1205 07:07:01.387416 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" 
containerID="37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b" exitCode=0 Dec 05 07:07:01 crc kubenswrapper[4780]: I1205 07:07:01.387476 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b"} Dec 05 07:07:03 crc kubenswrapper[4780]: E1205 07:07:03.025831 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b" Dec 05 07:07:03 crc kubenswrapper[4780]: E1205 07:07:03.026327 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ds5h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-mqbgb_openstack(44428dc2-af95-4541-b700-7ac3b81164d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 07:07:03 crc kubenswrapper[4780]: E1205 07:07:03.027499 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-mqbgb" 
podUID="44428dc2-af95-4541-b700-7ac3b81164d5" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.104135 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249131 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249238 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249270 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmcjp\" (UniqueName: \"kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.249443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts\") pod \"6fb0f320-1194-432e-80eb-c433df8a5257\" (UID: \"6fb0f320-1194-432e-80eb-c433df8a5257\") " Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.256799 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp" (OuterVolumeSpecName: "kube-api-access-xmcjp") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "kube-api-access-xmcjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.272737 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts" (OuterVolumeSpecName: "scripts") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.276835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.276947 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.292561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.299096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data" (OuterVolumeSpecName: "config-data") pod "6fb0f320-1194-432e-80eb-c433df8a5257" (UID: "6fb0f320-1194-432e-80eb-c433df8a5257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351453 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351487 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351498 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351510 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351518 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6fb0f320-1194-432e-80eb-c433df8a5257-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.351528 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmcjp\" (UniqueName: \"kubernetes.io/projected/6fb0f320-1194-432e-80eb-c433df8a5257-kube-api-access-xmcjp\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.406027 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jn4xw" 
event={"ID":"6fb0f320-1194-432e-80eb-c433df8a5257","Type":"ContainerDied","Data":"1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b"} Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.406078 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d4c5b29be40a792aedac87211ae536946f07c2caa4b4650c39d66b444d5ef7b" Dec 05 07:07:03 crc kubenswrapper[4780]: I1205 07:07:03.406921 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jn4xw" Dec 05 07:07:03 crc kubenswrapper[4780]: E1205 07:07:03.407943 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:f24234939afca841e46ea4d17bec959b63705ab0e75476465e777d44905c5f1b\\\"\"" pod="openstack/placement-db-sync-mqbgb" podUID="44428dc2-af95-4541-b700-7ac3b81164d5" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.209241 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jn4xw"] Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.223850 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jn4xw"] Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.285392 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j7ntf"] Dec 05 07:07:04 crc kubenswrapper[4780]: E1205 07:07:04.285815 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb0f320-1194-432e-80eb-c433df8a5257" containerName="keystone-bootstrap" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.285837 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb0f320-1194-432e-80eb-c433df8a5257" containerName="keystone-bootstrap" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.286092 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb0f320-1194-432e-80eb-c433df8a5257" containerName="keystone-bootstrap" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.286825 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.289994 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.290091 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8svcl" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.290189 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.290267 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.290297 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.300079 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j7ntf"] Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.366838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnw5\" (UniqueName: \"kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.366987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.367030 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.367072 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.367093 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.367111 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.468903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.468982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.469012 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.469032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.469074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqnw5\" (UniqueName: \"kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.469137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.474698 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.475312 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.475360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.476004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys\") pod \"keystone-bootstrap-j7ntf\" (UID: 
\"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.476215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.485837 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqnw5\" (UniqueName: \"kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5\") pod \"keystone-bootstrap-j7ntf\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") " pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:04 crc kubenswrapper[4780]: I1205 07:07:04.611326 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j7ntf" Dec 05 07:07:06 crc kubenswrapper[4780]: I1205 07:07:06.153074 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb0f320-1194-432e-80eb-c433df8a5257" path="/var/lib/kubelet/pods/6fb0f320-1194-432e-80eb-c433df8a5257/volumes" Dec 05 07:07:06 crc kubenswrapper[4780]: I1205 07:07:06.791172 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.585973 4780 scope.go:117] "RemoveContainer" containerID="d7c55e89701ff02da48f0e1e974d5df5bb85eda65db9027fb6647858fe5c5b66" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.665645 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.672270 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.792455 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.792609 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.810574 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config\") pod \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.810639 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.810739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.810829 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.810941 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-844n6\" (UniqueName: \"kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811010 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811043 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc\") pod \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811085 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811201 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb\") pod 
\"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811245 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb\") pod \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811272 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle\") pod \"cfdf34f2-3849-458f-9039-ee28fe0b998a\" (UID: \"cfdf34f2-3849-458f-9039-ee28fe0b998a\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.811325 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsnlf\" (UniqueName: \"kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf\") pod \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\" (UID: \"54e3232c-b0fc-4759-b08c-551fbdfc4c5f\") " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.813466 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs" (OuterVolumeSpecName: "logs") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.813828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.822384 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf" (OuterVolumeSpecName: "kube-api-access-jsnlf") pod "54e3232c-b0fc-4759-b08c-551fbdfc4c5f" (UID: "54e3232c-b0fc-4759-b08c-551fbdfc4c5f"). InnerVolumeSpecName "kube-api-access-jsnlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.827289 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6" (OuterVolumeSpecName: "kube-api-access-844n6") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "kube-api-access-844n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.828999 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.842049 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts" (OuterVolumeSpecName: "scripts") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.858950 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.877586 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54e3232c-b0fc-4759-b08c-551fbdfc4c5f" (UID: "54e3232c-b0fc-4759-b08c-551fbdfc4c5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.877768 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54e3232c-b0fc-4759-b08c-551fbdfc4c5f" (UID: "54e3232c-b0fc-4759-b08c-551fbdfc4c5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.879019 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54e3232c-b0fc-4759-b08c-551fbdfc4c5f" (UID: "54e3232c-b0fc-4759-b08c-551fbdfc4c5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.894453 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config" (OuterVolumeSpecName: "config") pod "54e3232c-b0fc-4759-b08c-551fbdfc4c5f" (UID: "54e3232c-b0fc-4759-b08c-551fbdfc4c5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.895980 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data" (OuterVolumeSpecName: "config-data") pod "cfdf34f2-3849-458f-9039-ee28fe0b998a" (UID: "cfdf34f2-3849-458f-9039-ee28fe0b998a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913191 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913222 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913240 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913252 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913266 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-844n6\" (UniqueName: \"kubernetes.io/projected/cfdf34f2-3849-458f-9039-ee28fe0b998a-kube-api-access-844n6\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913281 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfdf34f2-3849-458f-9039-ee28fe0b998a-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913292 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913334 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913347 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913359 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913373 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfdf34f2-3849-458f-9039-ee28fe0b998a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.913386 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsnlf\" (UniqueName: \"kubernetes.io/projected/54e3232c-b0fc-4759-b08c-551fbdfc4c5f-kube-api-access-jsnlf\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:11 crc kubenswrapper[4780]: I1205 07:07:11.934836 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.014676 4780 reconciler_common.go:293] "Volume detached for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.487286 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" event={"ID":"54e3232c-b0fc-4759-b08c-551fbdfc4c5f","Type":"ContainerDied","Data":"74bf4e1f872c4c69f5db5de3ef62461a0049436e24c106030f65d562a1c9a05f"} Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.487444 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d65c867-9pzr6" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.491652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cfdf34f2-3849-458f-9039-ee28fe0b998a","Type":"ContainerDied","Data":"85b465f5398988ce51d136d5640f5c08fc6d89d9c75cbd473a26f946af6cfedb"} Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.491814 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.515932 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.527925 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784d65c867-9pzr6"] Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.535264 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.544381 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.563448 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.563842 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="init" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.563855 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="init" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.563874 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-log" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.563899 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-log" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.563921 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-httpd" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.563928 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-httpd" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.563948 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.563955 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.564146 
4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-httpd" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.564165 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" containerName="glance-log" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.564177 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" containerName="dnsmasq-dns" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.565129 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.568418 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.570405 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.595341 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632167 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632235 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632287 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632313 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbvc\" (UniqueName: \"kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632352 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" 
(UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632403 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.632421 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733495 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733537 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733570 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733655 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733678 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sbvc\" (UniqueName: \"kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc\") pod \"glance-default-internal-api-0\" (UID: 
\"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733718 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.733870 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.735840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.736323 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.741035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.741679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.741741 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.750026 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.761139 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sbvc\" (UniqueName: \"kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: 
I1205 07:07:12.770787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.812497 4780 scope.go:117] "RemoveContainer" containerID="2cffc0fbafe881f6d1cc6fb53dc07f8d5a3aeb1ce491c38fa67b6155bb864e41" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.850085 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.850306 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-742f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7kv9n_openstack(c67223f0-4471-424c-b74d-886cec703c8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 05 07:07:12 crc kubenswrapper[4780]: E1205 07:07:12.851471 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7kv9n" podUID="c67223f0-4471-424c-b74d-886cec703c8a" Dec 05 07:07:12 crc kubenswrapper[4780]: I1205 07:07:12.896071 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.216143 4780 scope.go:117] "RemoveContainer" containerID="d4e673a5d0d5b95b4e087e873639a0e910f7ecaf75ab1e36768194122864027c" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.247559 4780 scope.go:117] "RemoveContainer" containerID="4da3382c1cc78ad5ffd7d73cf7eb3aa0c2f34d72198f18e928a8b6c08c890fe2" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.293974 4780 scope.go:117] "RemoveContainer" containerID="2e233f628eef75ef2f9ea44caa91cab3e49e6d3447a9c45c48b5a6bd85af32e4" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.327021 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.372105 4780 scope.go:117] "RemoveContainer" containerID="01cdd962fbdb541e613c40ab5bc3acbddcaf24bc6bebd743eb4c9108fa7d703e" Dec 05 07:07:13 crc kubenswrapper[4780]: W1205 07:07:13.381489 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb431f9a_d6b2_49e5_8d78_ca2ff1d375e4.slice/crio-e77f48fdf04a17af2c4631e7ba933264c21206ecf85f835bcfa934f4acc53b9e WatchSource:0}: Error finding container e77f48fdf04a17af2c4631e7ba933264c21206ecf85f835bcfa934f4acc53b9e: Status 404 returned error can't find the container with id e77f48fdf04a17af2c4631e7ba933264c21206ecf85f835bcfa934f4acc53b9e Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.511553 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f9r8" event={"ID":"a154e1e8-52d0-43c2-8685-cd8769db58d0","Type":"ContainerStarted","Data":"cad38a88de1d1c3785a9becf732629324425632ba07e9746939e332ec95a2266"} Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.519168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757"} Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.522288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerStarted","Data":"e77f48fdf04a17af2c4631e7ba933264c21206ecf85f835bcfa934f4acc53b9e"} Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.524299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerStarted","Data":"5a330b49dc443f51f2a618fb3f55aadf083dcb8a1047463013ce92f795d30da6"} Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.533055 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5f9r8" podStartSLOduration=2.956826922 podStartE2EDuration="28.533033425s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" 
firstStartedPulling="2025-12-05 07:06:47.23945077 +0000 UTC m=+1241.308967092" lastFinishedPulling="2025-12-05 07:07:12.815657263 +0000 UTC m=+1266.885173595" observedRunningTime="2025-12-05 07:07:13.528313708 +0000 UTC m=+1267.597830040" watchObservedRunningTime="2025-12-05 07:07:13.533033425 +0000 UTC m=+1267.602549757" Dec 05 07:07:13 crc kubenswrapper[4780]: E1205 07:07:13.549581 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2\\\"\"" pod="openstack/cinder-db-sync-7kv9n" podUID="c67223f0-4471-424c-b74d-886cec703c8a" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.573491 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j7ntf"] Dec 05 07:07:13 crc kubenswrapper[4780]: W1205 07:07:13.577034 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53574ab_8107_4d3d_a695_d64db3bbb908.slice/crio-ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5 WatchSource:0}: Error finding container ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5: Status 404 returned error can't find the container with id ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5 Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.585804 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 07:07:13 crc kubenswrapper[4780]: I1205 07:07:13.764744 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:07:13 crc kubenswrapper[4780]: W1205 07:07:13.793598 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5738fd8e_a30a_4470_8bf3_47c00286f574.slice/crio-7776cd34a8d8963b152da41c2216abf5cea97fe65c74cb98f9e7290b0c0bf392 WatchSource:0}: Error finding container 7776cd34a8d8963b152da41c2216abf5cea97fe65c74cb98f9e7290b0c0bf392: Status 404 returned error can't find the container with id 7776cd34a8d8963b152da41c2216abf5cea97fe65c74cb98f9e7290b0c0bf392 Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.170470 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e3232c-b0fc-4759-b08c-551fbdfc4c5f" path="/var/lib/kubelet/pods/54e3232c-b0fc-4759-b08c-551fbdfc4c5f/volumes" Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.172137 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdf34f2-3849-458f-9039-ee28fe0b998a" path="/var/lib/kubelet/pods/cfdf34f2-3849-458f-9039-ee28fe0b998a/volumes" Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.558831 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j7ntf" event={"ID":"e53574ab-8107-4d3d-a695-d64db3bbb908","Type":"ContainerStarted","Data":"994f56fa937e545b930e9da333108023bc7033e8db6e0fff54778e83bdef084c"} Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.559163 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j7ntf" event={"ID":"e53574ab-8107-4d3d-a695-d64db3bbb908","Type":"ContainerStarted","Data":"ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5"} Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.564991 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerStarted","Data":"5d3f3495084a9ecdf86129ca76a1eb3a6a32ef24686a63cd2607d3b48ddb2d48"} Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.587036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerStarted","Data":"2ab4dd976fe70a55ae448cbd52e5d5965f6936762e89b58a57dd0e7681870ad2"} Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.587113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerStarted","Data":"7776cd34a8d8963b152da41c2216abf5cea97fe65c74cb98f9e7290b0c0bf392"} Dec 05 07:07:14 crc kubenswrapper[4780]: I1205 07:07:14.601138 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j7ntf" podStartSLOduration=10.601119705 podStartE2EDuration="10.601119705s" podCreationTimestamp="2025-12-05 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:14.589581074 +0000 UTC m=+1268.659097396" watchObservedRunningTime="2025-12-05 07:07:14.601119705 +0000 UTC m=+1268.670636037" Dec 05 07:07:15 crc kubenswrapper[4780]: I1205 07:07:15.583199 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-log" containerID="cri-o://5d3f3495084a9ecdf86129ca76a1eb3a6a32ef24686a63cd2607d3b48ddb2d48" gracePeriod=30 Dec 05 07:07:15 crc kubenswrapper[4780]: I1205 07:07:15.583412 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerStarted","Data":"87fee980f30c5a4569134ce0928eb991814a467f5b59fb9ae808da6638ae92f7"} Dec 05 07:07:15 crc kubenswrapper[4780]: I1205 07:07:15.583783 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-httpd" containerID="cri-o://87fee980f30c5a4569134ce0928eb991814a467f5b59fb9ae808da6638ae92f7" gracePeriod=30 Dec 05 07:07:15 crc kubenswrapper[4780]: I1205 07:07:15.618712 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.618687712 podStartE2EDuration="21.618687712s" podCreationTimestamp="2025-12-05 07:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:15.610555732 +0000 UTC m=+1269.680072085" watchObservedRunningTime="2025-12-05 07:07:15.618687712 +0000 UTC m=+1269.688204044" Dec 05 07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.601741 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerID="87fee980f30c5a4569134ce0928eb991814a467f5b59fb9ae808da6638ae92f7" exitCode=0 Dec 05 07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.602082 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerID="5d3f3495084a9ecdf86129ca76a1eb3a6a32ef24686a63cd2607d3b48ddb2d48" exitCode=143 Dec 05 
07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.601836 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerDied","Data":"87fee980f30c5a4569134ce0928eb991814a467f5b59fb9ae808da6638ae92f7"} Dec 05 07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.602201 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerDied","Data":"5d3f3495084a9ecdf86129ca76a1eb3a6a32ef24686a63cd2607d3b48ddb2d48"} Dec 05 07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.605017 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerStarted","Data":"bd58ca4bcdb9ce3889888c2869ac2ebaa3277c1937950ee6e41b129e6d72bae6"} Dec 05 07:07:16 crc kubenswrapper[4780]: I1205 07:07:16.639857 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.639788944 podStartE2EDuration="4.639788944s" podCreationTimestamp="2025-12-05 07:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:16.625280753 +0000 UTC m=+1270.694797085" watchObservedRunningTime="2025-12-05 07:07:16.639788944 +0000 UTC m=+1270.709305276" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.282662 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392241 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392387 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392428 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392497 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbsc4\" (UniqueName: \"kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392554 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392599 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.392643 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\" (UID: \"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4\") " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.393197 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs" (OuterVolumeSpecName: "logs") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.393516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.397096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.397894 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts" (OuterVolumeSpecName: "scripts") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.398437 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4" (OuterVolumeSpecName: "kube-api-access-bbsc4") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "kube-api-access-bbsc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.427847 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.449625 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data" (OuterVolumeSpecName: "config-data") pod "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" (UID: "bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494516 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494572 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494614 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494628 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494639 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494649 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.494660 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbsc4\" (UniqueName: \"kubernetes.io/projected/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4-kube-api-access-bbsc4\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.513460 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.596404 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.658002 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerStarted","Data":"c51d113e6b9bf551f86be82d2e03a05389e6b696f9ad164dfbb8351b5090597f"} Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.660981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mqbgb" event={"ID":"44428dc2-af95-4541-b700-7ac3b81164d5","Type":"ContainerStarted","Data":"e110e2d54c09d0e88283f90c73063aeb81b41224ace45de198d5032e23abeb2e"} Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.665504 4780 generic.go:334] "Generic (PLEG): container finished" podID="e53574ab-8107-4d3d-a695-d64db3bbb908" containerID="994f56fa937e545b930e9da333108023bc7033e8db6e0fff54778e83bdef084c" exitCode=0 Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.665564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j7ntf" event={"ID":"e53574ab-8107-4d3d-a695-d64db3bbb908","Type":"ContainerDied","Data":"994f56fa937e545b930e9da333108023bc7033e8db6e0fff54778e83bdef084c"} Dec 05 07:07:20 crc 
kubenswrapper[4780]: I1205 07:07:20.668563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4","Type":"ContainerDied","Data":"e77f48fdf04a17af2c4631e7ba933264c21206ecf85f835bcfa934f4acc53b9e"} Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.668620 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.668623 4780 scope.go:117] "RemoveContainer" containerID="87fee980f30c5a4569134ce0928eb991814a467f5b59fb9ae808da6638ae92f7" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.680119 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mqbgb" podStartSLOduration=2.541892556 podStartE2EDuration="35.680100555s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="2025-12-05 07:06:47.222700899 +0000 UTC m=+1241.292217241" lastFinishedPulling="2025-12-05 07:07:20.360908908 +0000 UTC m=+1274.430425240" observedRunningTime="2025-12-05 07:07:20.676809806 +0000 UTC m=+1274.746326158" watchObservedRunningTime="2025-12-05 07:07:20.680100555 +0000 UTC m=+1274.749616887" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.715622 4780 scope.go:117] "RemoveContainer" containerID="5d3f3495084a9ecdf86129ca76a1eb3a6a32ef24686a63cd2607d3b48ddb2d48" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.730809 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.754164 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.770961 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:07:20 crc kubenswrapper[4780]: E1205 07:07:20.771289 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-httpd" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.771301 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-httpd" Dec 05 07:07:20 crc kubenswrapper[4780]: E1205 07:07:20.771328 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-log" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.771336 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-log" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.771521 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-log" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.771532 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" containerName="glance-httpd" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.772401 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.780480 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.802837 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.803137 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904316 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904452 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904471 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904616 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904679 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vsdf\" (UniqueName: \"kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:20 crc kubenswrapper[4780]: I1205 07:07:20.904713 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.005924 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006006 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006044 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006068 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006090 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006113 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006138 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsdf\" (UniqueName: \"kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006156 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.006581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.007075 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.007525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.011720 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.011760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.013766 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.015557 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.024428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsdf\" (UniqueName: \"kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.035391 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " pod="openstack/glance-default-external-api-0" Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.132139 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.466191 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 05 07:07:21 crc kubenswrapper[4780]: I1205 07:07:21.699376 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerStarted","Data":"e21405c9e8e46ab829406f66f28af255b53885faf35febac268bbe84c118de4c"}
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.124114 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j7ntf"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.157422 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4" path="/var/lib/kubelet/pods/bb431f9a-d6b2-49e5-8d78-ca2ff1d375e4/volumes"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.473638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqnw5\" (UniqueName: \"kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.474100 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.474144 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.474181 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.474216 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.474818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle\") pod \"e53574ab-8107-4d3d-a695-d64db3bbb908\" (UID: \"e53574ab-8107-4d3d-a695-d64db3bbb908\") "
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.478591 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts" (OuterVolumeSpecName: "scripts") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.479147 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.479173 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5" (OuterVolumeSpecName: "kube-api-access-kqnw5") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "kube-api-access-kqnw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.479175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.507571 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.508565 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data" (OuterVolumeSpecName: "config-data") pod "e53574ab-8107-4d3d-a695-d64db3bbb908" (UID: "e53574ab-8107-4d3d-a695-d64db3bbb908"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.576965 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.577005 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqnw5\" (UniqueName: \"kubernetes.io/projected/e53574ab-8107-4d3d-a695-d64db3bbb908-kube-api-access-kqnw5\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.577020 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.577034 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.577047 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.577057 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e53574ab-8107-4d3d-a695-d64db3bbb908-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.715620 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerStarted","Data":"7d0604fcc8d102de413cadab5cfc7ff08b472baf10c9737ead45b9bdc22e6270"}
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.722461 4780 generic.go:334] "Generic (PLEG): container finished" podID="a154e1e8-52d0-43c2-8685-cd8769db58d0" containerID="cad38a88de1d1c3785a9becf732629324425632ba07e9746939e332ec95a2266" exitCode=0
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.722529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f9r8" event={"ID":"a154e1e8-52d0-43c2-8685-cd8769db58d0","Type":"ContainerDied","Data":"cad38a88de1d1c3785a9becf732629324425632ba07e9746939e332ec95a2266"}
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.724787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j7ntf" event={"ID":"e53574ab-8107-4d3d-a695-d64db3bbb908","Type":"ContainerDied","Data":"ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5"}
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.724817 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac93fcb08156b25d9c5b8dbeb6a8560d02302cda4da4a9f8b9a75cceeae4eec5"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.724862 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j7ntf"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.887029 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"]
Dec 05 07:07:22 crc kubenswrapper[4780]: E1205 07:07:22.887682 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53574ab-8107-4d3d-a695-d64db3bbb908" containerName="keystone-bootstrap"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.887698 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53574ab-8107-4d3d-a695-d64db3bbb908" containerName="keystone-bootstrap"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.887870 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53574ab-8107-4d3d-a695-d64db3bbb908" containerName="keystone-bootstrap"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.888761 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.893773 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8svcl"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.894123 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.894299 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.894394 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.894870 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.896036 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.896474 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.897785 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.904592 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"]
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.957966 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.977598 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987083 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987117 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987141 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp2h\" (UniqueName: \"kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987538 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987586 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:22 crc kubenswrapper[4780]: I1205 07:07:22.987656 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.088787 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.088859 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.089025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.089911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.089976 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvp2h\" (UniqueName: \"kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.090081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.090122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.090148 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.095911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.096372 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.105289 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.107406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.107643 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.114583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvp2h\" (UniqueName: \"kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.118020 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.120363 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data\") pod \"keystone-5c9f9456b6-zflhk\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.216354 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.532129 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"]
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.736249 4780 generic.go:334] "Generic (PLEG): container finished" podID="d8ecabbe-038f-4714-b9a1-5f2efef47afd" containerID="17c12e148aee39d277dd21f751e37c01d7142872900b50c7990ad1ae85ded518" exitCode=0
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.736385 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqpwk" event={"ID":"d8ecabbe-038f-4714-b9a1-5f2efef47afd","Type":"ContainerDied","Data":"17c12e148aee39d277dd21f751e37c01d7142872900b50c7990ad1ae85ded518"}
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.739734 4780 generic.go:334] "Generic (PLEG): container finished" podID="44428dc2-af95-4541-b700-7ac3b81164d5" containerID="e110e2d54c09d0e88283f90c73063aeb81b41224ace45de198d5032e23abeb2e" exitCode=0
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.739803 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mqbgb" event={"ID":"44428dc2-af95-4541-b700-7ac3b81164d5","Type":"ContainerDied","Data":"e110e2d54c09d0e88283f90c73063aeb81b41224ace45de198d5032e23abeb2e"}
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.748364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerStarted","Data":"bca6744717308a0be549da78a3f61dff30b42adb90908a02ce716454f8e3df70"}
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.756696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c9f9456b6-zflhk" event={"ID":"fb8bb2be-991d-4cb3-b3b9-9175c78019d9","Type":"ContainerStarted","Data":"38735d60427b27a0b905ae0c484929bc6404f9b55ce5830ca95e81314848b8e2"}
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.757433 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.757470 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:23 crc kubenswrapper[4780]: I1205 07:07:23.782486 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.782464105 podStartE2EDuration="3.782464105s" podCreationTimestamp="2025-12-05 07:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:23.781634073 +0000 UTC m=+1277.851150415" watchObservedRunningTime="2025-12-05 07:07:23.782464105 +0000 UTC m=+1277.851980437"
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.061936 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f9r8"
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.215785 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8d2d\" (UniqueName: \"kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d\") pod \"a154e1e8-52d0-43c2-8685-cd8769db58d0\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") "
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.215867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle\") pod \"a154e1e8-52d0-43c2-8685-cd8769db58d0\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") "
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.216055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data\") pod \"a154e1e8-52d0-43c2-8685-cd8769db58d0\" (UID: \"a154e1e8-52d0-43c2-8685-cd8769db58d0\") "
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.224073 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a154e1e8-52d0-43c2-8685-cd8769db58d0" (UID: "a154e1e8-52d0-43c2-8685-cd8769db58d0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.224237 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d" (OuterVolumeSpecName: "kube-api-access-z8d2d") pod "a154e1e8-52d0-43c2-8685-cd8769db58d0" (UID: "a154e1e8-52d0-43c2-8685-cd8769db58d0"). InnerVolumeSpecName "kube-api-access-z8d2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.256183 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a154e1e8-52d0-43c2-8685-cd8769db58d0" (UID: "a154e1e8-52d0-43c2-8685-cd8769db58d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.318745 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8d2d\" (UniqueName: \"kubernetes.io/projected/a154e1e8-52d0-43c2-8685-cd8769db58d0-kube-api-access-z8d2d\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.318771 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.318782 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a154e1e8-52d0-43c2-8685-cd8769db58d0-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.789218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c9f9456b6-zflhk" event={"ID":"fb8bb2be-991d-4cb3-b3b9-9175c78019d9","Type":"ContainerStarted","Data":"e2272792c63e1f2159b6320d0d6009da818ee74ca29907f48e07464acca5482a"}
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.789599 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c9f9456b6-zflhk"
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.794395 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f9r8"
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.797028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f9r8" event={"ID":"a154e1e8-52d0-43c2-8685-cd8769db58d0","Type":"ContainerDied","Data":"f06e09fc35c4d336404fccf3dea7b74b20124dce6865979db667632f3fb6db80"}
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.797064 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06e09fc35c4d336404fccf3dea7b74b20124dce6865979db667632f3fb6db80"
Dec 05 07:07:24 crc kubenswrapper[4780]: I1205 07:07:24.817585 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c9f9456b6-zflhk" podStartSLOduration=2.8175656460000003 podStartE2EDuration="2.817565646s" podCreationTimestamp="2025-12-05 07:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:24.815595272 +0000 UTC m=+1278.885111614" watchObservedRunningTime="2025-12-05 07:07:24.817565646 +0000 UTC m=+1278.887081978"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.073807 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"]
Dec 05 07:07:25 crc kubenswrapper[4780]: E1205 07:07:25.074232 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a154e1e8-52d0-43c2-8685-cd8769db58d0" containerName="barbican-db-sync"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.074249 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a154e1e8-52d0-43c2-8685-cd8769db58d0" containerName="barbican-db-sync"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.074434 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a154e1e8-52d0-43c2-8685-cd8769db58d0" containerName="barbican-db-sync"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.076580 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.084048 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-g2dxj"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.084243 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.085827 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.102950 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.104690 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.108302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.145516 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.190064 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.238112 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.238492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.238599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.238692 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.238838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pj5\" (UniqueName: \"kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.239079 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.239196 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.239346 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslrx\" (UniqueName: \"kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.239541 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.239698 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.249739 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.251266 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.264467 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.341396 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.341967 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qslrx\" (UniqueName: \"kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85cn\" (UniqueName: \"kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342179 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342275 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342312 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342451 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342480 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.342512 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8pj5\" (UniqueName: \"kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.344757 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.348825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.353976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.356933 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.358017 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.359807 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.371395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.371670 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.372431 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pj5\" (UniqueName: \"kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5\") pod \"barbican-keystone-listener-fc64465bd-vwr2q\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.400920 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qslrx\" (UniqueName: \"kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx\") pod \"barbican-worker-59d58fb65c-nzf5k\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.432679 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.447978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.448071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.448101 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.448124 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.448150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.448210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85cn\" (UniqueName: \"kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.449579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.450336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.453797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.454133 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59d58fb65c-nzf5k"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.455004 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.455542 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.479503 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.492186 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.496180 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.510030 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"]
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.510587 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85cn\" (UniqueName: \"kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn\") pod \"dnsmasq-dns-fcf9cb85-72d6t\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.559405 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.559655 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.559682 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.559775 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2ls\" (UniqueName: \"kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.559813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.646594 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.662400 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.662534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.662577 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.662631 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2ls\" (UniqueName: \"kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.662660 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.668395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.674601 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.678007 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.691981 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2ls\" (UniqueName: \"kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.696556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle\") pod \"barbican-api-d77f6ccb8-bx2bz\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:25 crc kubenswrapper[4780]: I1205 07:07:25.903225 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d77f6ccb8-bx2bz"
Dec 05 07:07:26 crc kubenswrapper[4780]: I1205 07:07:26.192832 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:26 crc kubenswrapper[4780]: I1205 07:07:26.193264 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 05 07:07:26 crc kubenswrapper[4780]: I1205 07:07:26.493088 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.823752 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"]
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.827128 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.837320 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.837511 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.857631 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"]
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.940807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.940981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89pf4\" (UniqueName: \"kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.941051 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.941259 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.941335 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.941411 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:28 crc kubenswrapper[4780]: I1205 07:07:28.941535 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043359 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043400 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043445 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89pf4\" (UniqueName: \"kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.043552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.044356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.051615 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.051851 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.052472 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.053120 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.053339 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.065565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89pf4\" (UniqueName: \"kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4\") pod \"barbican-api-799c48f5f4-sm7kz\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:29 crc kubenswrapper[4780]: I1205 07:07:29.168907 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-799c48f5f4-sm7kz"
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.486920 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gqpwk"
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.538091 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mqbgb"
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571527 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmhvj\" (UniqueName: \"kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj\") pod \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571578 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs\") pod \"44428dc2-af95-4541-b700-7ac3b81164d5\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571621 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle\") pod \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571642 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data\") pod \"44428dc2-af95-4541-b700-7ac3b81164d5\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds5h4\" (UniqueName: \"kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4\") pod \"44428dc2-af95-4541-b700-7ac3b81164d5\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571704 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle\") pod \"44428dc2-af95-4541-b700-7ac3b81164d5\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config\") pod \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\" (UID: \"d8ecabbe-038f-4714-b9a1-5f2efef47afd\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.571855 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts\") pod \"44428dc2-af95-4541-b700-7ac3b81164d5\" (UID: \"44428dc2-af95-4541-b700-7ac3b81164d5\") "
Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.572678 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs" (OuterVolumeSpecName: "logs") pod "44428dc2-af95-4541-b700-7ac3b81164d5" (UID: "44428dc2-af95-4541-b700-7ac3b81164d5"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.573961 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44428dc2-af95-4541-b700-7ac3b81164d5-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.577071 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts" (OuterVolumeSpecName: "scripts") pod "44428dc2-af95-4541-b700-7ac3b81164d5" (UID: "44428dc2-af95-4541-b700-7ac3b81164d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.578445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4" (OuterVolumeSpecName: "kube-api-access-ds5h4") pod "44428dc2-af95-4541-b700-7ac3b81164d5" (UID: "44428dc2-af95-4541-b700-7ac3b81164d5"). InnerVolumeSpecName "kube-api-access-ds5h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.608761 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj" (OuterVolumeSpecName: "kube-api-access-zmhvj") pod "d8ecabbe-038f-4714-b9a1-5f2efef47afd" (UID: "d8ecabbe-038f-4714-b9a1-5f2efef47afd"). InnerVolumeSpecName "kube-api-access-zmhvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.614151 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8ecabbe-038f-4714-b9a1-5f2efef47afd" (UID: "d8ecabbe-038f-4714-b9a1-5f2efef47afd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.618403 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config" (OuterVolumeSpecName: "config") pod "d8ecabbe-038f-4714-b9a1-5f2efef47afd" (UID: "d8ecabbe-038f-4714-b9a1-5f2efef47afd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.631227 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data" (OuterVolumeSpecName: "config-data") pod "44428dc2-af95-4541-b700-7ac3b81164d5" (UID: "44428dc2-af95-4541-b700-7ac3b81164d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.631930 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44428dc2-af95-4541-b700-7ac3b81164d5" (UID: "44428dc2-af95-4541-b700-7ac3b81164d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676025 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmhvj\" (UniqueName: \"kubernetes.io/projected/d8ecabbe-038f-4714-b9a1-5f2efef47afd-kube-api-access-zmhvj\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676060 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676069 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676078 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds5h4\" (UniqueName: \"kubernetes.io/projected/44428dc2-af95-4541-b700-7ac3b81164d5-kube-api-access-ds5h4\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676089 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676098 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8ecabbe-038f-4714-b9a1-5f2efef47afd-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.676106 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44428dc2-af95-4541-b700-7ac3b81164d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:30 crc kubenswrapper[4780]: E1205 07:07:30.702940 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.858934 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"] Dec 05 07:07:30 crc kubenswrapper[4780]: W1205 07:07:30.863546 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b8df94_a979_4c1a_bffd_5f5052f0ad12.slice/crio-691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf WatchSource:0}: Error finding container 691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf: Status 404 returned error can't find the container with id 691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.875735 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqpwk" event={"ID":"d8ecabbe-038f-4714-b9a1-5f2efef47afd","Type":"ContainerDied","Data":"8032c2681e5d0b3b5078b4138f13134697a37751c650aad9edd41872d5616aae"} Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.875783 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8032c2681e5d0b3b5078b4138f13134697a37751c650aad9edd41872d5616aae" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 
07:07:30.875752 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gqpwk" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.878834 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mqbgb" event={"ID":"44428dc2-af95-4541-b700-7ac3b81164d5","Type":"ContainerDied","Data":"a2d0ec2907a5dab1762ce2556bea52e4ebc24e4524d98deac46212e6bf4ae6f2"} Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.878904 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d0ec2907a5dab1762ce2556bea52e4ebc24e4524d98deac46212e6bf4ae6f2" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.878899 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mqbgb" Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.882454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerStarted","Data":"cb842b50001bf76fa89c013926f3979f8fdf7ef0abc766b6eed703795fd049a4"} Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.882670 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="ceilometer-notification-agent" containerID="cri-o://5a330b49dc443f51f2a618fb3f55aadf083dcb8a1047463013ce92f795d30da6" gracePeriod=30 Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.882800 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="sg-core" containerID="cri-o://c51d113e6b9bf551f86be82d2e03a05389e6b696f9ad164dfbb8351b5090597f" gracePeriod=30 Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.882863 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="proxy-httpd" containerID="cri-o://cb842b50001bf76fa89c013926f3979f8fdf7ef0abc766b6eed703795fd049a4" gracePeriod=30 Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.946253 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"] Dec 05 07:07:30 crc kubenswrapper[4780]: I1205 07:07:30.955193 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"] Dec 05 07:07:31 crc kubenswrapper[4780]: W1205 07:07:31.031711 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa86c0d1_d6cb_4566_b4b3_352c690b0a96.slice/crio-59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c WatchSource:0}: Error finding container 59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c: Status 404 returned error can't find the container with id 59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.033097 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.050931 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.132707 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.132754 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.186231 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.202127 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.696529 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:07:31 crc kubenswrapper[4780]: E1205 07:07:31.697567 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ecabbe-038f-4714-b9a1-5f2efef47afd" containerName="neutron-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.697581 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ecabbe-038f-4714-b9a1-5f2efef47afd" containerName="neutron-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: E1205 07:07:31.697616 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44428dc2-af95-4541-b700-7ac3b81164d5" containerName="placement-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.697626 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="44428dc2-af95-4541-b700-7ac3b81164d5" containerName="placement-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.697824 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ecabbe-038f-4714-b9a1-5f2efef47afd" containerName="neutron-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.697839 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="44428dc2-af95-4541-b700-7ac3b81164d5" containerName="placement-db-sync" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.699037 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.702785 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.702920 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.702798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.703061 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.703253 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mtf4p" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.712149 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.778561 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800063 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800236 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800297 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800362 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data\") pod \"placement-669bccb86b-8cjsq\" (UID: 
\"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.800388 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ff5k\" (UniqueName: \"kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.827432 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.828948 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.859952 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902005 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902120 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902182 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902245 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ff5k\" 
(UniqueName: \"kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df7k\" (UniqueName: \"kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902402 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902443 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.902518 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.906765 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"] Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.908370 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.932253 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.969170 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.971406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:31 crc kubenswrapper[4780]: I1205 07:07:31.980082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.002382 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.012069 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d6h5f" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.012907 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.021642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.021983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.022085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.022245 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " 
pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.022366 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.022503 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df7k\" (UniqueName: \"kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.022554 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.025226 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.026685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.036222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.036287 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.038420 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.064782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ff5k\" (UniqueName: \"kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.068482 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" 
(UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.068607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df7k\" (UniqueName: \"kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k\") pod \"dnsmasq-dns-5768d59dd9-ftl9r\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.070224 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle\") pod \"placement-669bccb86b-8cjsq\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.099592 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"] Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.117941 4780 generic.go:334] "Generic (PLEG): container finished" podID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" containerID="ad68d542037ed1614224516474d2e4ef6e33875d9bce1dcf46e5f3cb65e27b0a" exitCode=0 Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.118339 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" event={"ID":"e337b12d-84a8-47e6-8d3f-89c15f0b547d","Type":"ContainerDied","Data":"ad68d542037ed1614224516474d2e4ef6e33875d9bce1dcf46e5f3cb65e27b0a"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.118402 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" event={"ID":"e337b12d-84a8-47e6-8d3f-89c15f0b547d","Type":"ContainerStarted","Data":"a333a1cc7377b5b3c1683c68b1d7d66299d3cdb5086a1c0573da7b0a8a47fa72"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.127899 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.128026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.128267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpzq\" (UniqueName: \"kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.128630 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: 
I1205 07:07:32.128742 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.162393 4780 generic.go:334] "Generic (PLEG): container finished" podID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerID="cb842b50001bf76fa89c013926f3979f8fdf7ef0abc766b6eed703795fd049a4" exitCode=0 Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.162436 4780 generic.go:334] "Generic (PLEG): container finished" podID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerID="c51d113e6b9bf551f86be82d2e03a05389e6b696f9ad164dfbb8351b5090597f" exitCode=2 Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.187764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerDied","Data":"cb842b50001bf76fa89c013926f3979f8fdf7ef0abc766b6eed703795fd049a4"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.187815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerDied","Data":"c51d113e6b9bf551f86be82d2e03a05389e6b696f9ad164dfbb8351b5090597f"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.187829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerStarted","Data":"59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.196800 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerStarted","Data":"d53dc0d388327b410b5f7b526c5655ce1ba3965c6cef72550c990c438f6723b0"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.203835 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.210144 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerStarted","Data":"d247be1b147a98f7d05a4bb3c8635747189f02eca874ffceb138264c83747cc4"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.210199 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerStarted","Data":"1f72197d67bb45b009e4fc63d14efd6e5634ae9d06e9c8d83b9c4a8b9a6be45a"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.210212 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerStarted","Data":"691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.210541 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-799c48f5f4-sm7kz" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.210595 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-799c48f5f4-sm7kz" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.231167 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpzq\" (UniqueName: \"kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.232441 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.232546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.232637 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.232703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.235156 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kv9n" 
event={"ID":"c67223f0-4471-424c-b74d-886cec703c8a","Type":"ContainerStarted","Data":"1149f20bd04bc2a2bf513a262f145e4eb15d251702407c87dc7d603d89e3e28d"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.239131 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.243526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerStarted","Data":"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.243613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerStarted","Data":"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.243626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerStarted","Data":"19bd9e1949876468ebd118cfcb26979b198a5fe8f3da1eede77925521a66b7ba"} Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.246463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.250792 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-799c48f5f4-sm7kz" podStartSLOduration=4.250763861 podStartE2EDuration="4.250763861s" podCreationTimestamp="2025-12-05 07:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:32.247515042 +0000 UTC m=+1286.317031394" watchObservedRunningTime="2025-12-05 07:07:32.250763861 +0000 UTC m=+1286.320280203" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.255427 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.261871 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.257012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.262834 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 
07:07:32.266489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpzq\" (UniqueName: \"kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq\") pod \"neutron-5c469598fb-5vvx6\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") " pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.274774 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podStartSLOduration=7.274756107 podStartE2EDuration="7.274756107s" podCreationTimestamp="2025-12-05 07:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:32.271996173 +0000 UTC m=+1286.341512515" watchObservedRunningTime="2025-12-05 07:07:32.274756107 +0000 UTC m=+1286.344272439" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.301226 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7kv9n" podStartSLOduration=4.1357863009999996 podStartE2EDuration="47.30120219s" podCreationTimestamp="2025-12-05 07:06:45 +0000 UTC" firstStartedPulling="2025-12-05 07:06:47.280828275 +0000 UTC m=+1241.350344607" lastFinishedPulling="2025-12-05 07:07:30.446244164 +0000 UTC m=+1284.515760496" observedRunningTime="2025-12-05 07:07:32.296134113 +0000 UTC m=+1286.365650465" watchObservedRunningTime="2025-12-05 07:07:32.30120219 +0000 UTC m=+1286.370718522" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.345164 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:32 crc kubenswrapper[4780]: I1205 07:07:32.503300 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:33 crc kubenswrapper[4780]: E1205 07:07:33.040623 4780 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 05 07:07:33 crc kubenswrapper[4780]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e337b12d-84a8-47e6-8d3f-89c15f0b547d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 07:07:33 crc kubenswrapper[4780]: > podSandboxID="a333a1cc7377b5b3c1683c68b1d7d66299d3cdb5086a1c0573da7b0a8a47fa72" Dec 05 07:07:33 crc kubenswrapper[4780]: E1205 07:07:33.041208 4780 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 05 07:07:33 crc kubenswrapper[4780]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574hbch97h666hbbh5fch555h5ddh649h699hf4h9ch6h699h55h5b7h5b9h5d5hf6h686h5cfh599h594h559h645h699h55h5f8h54ch555h55bh655q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j85cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-fcf9cb85-72d6t_openstack(e337b12d-84a8-47e6-8d3f-89c15f0b547d): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e337b12d-84a8-47e6-8d3f-89c15f0b547d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 07:07:33 crc kubenswrapper[4780]: > logger="UnhandledError" Dec 05 07:07:33 crc kubenswrapper[4780]: E1205 07:07:33.042529 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e337b12d-84a8-47e6-8d3f-89c15f0b547d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" podUID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.092779 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:33 crc kubenswrapper[4780]: W1205 07:07:33.117173 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23dd56ae_e398_46f7_9a63_b5034ae7e76a.slice/crio-4af34759bbdc9fd4c8c47747a18f7ea455240573f9e1ecd2eac399ee2be76f2c WatchSource:0}: Error finding container 4af34759bbdc9fd4c8c47747a18f7ea455240573f9e1ecd2eac399ee2be76f2c: Status 404 returned error can't find the container with id 4af34759bbdc9fd4c8c47747a18f7ea455240573f9e1ecd2eac399ee2be76f2c Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.185898 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.267074 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" event={"ID":"23dd56ae-e398-46f7-9a63-b5034ae7e76a","Type":"ContainerStarted","Data":"4af34759bbdc9fd4c8c47747a18f7ea455240573f9e1ecd2eac399ee2be76f2c"} Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.267426 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d77f6ccb8-bx2bz" Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.267453 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d77f6ccb8-bx2bz" Dec 05 07:07:33 crc kubenswrapper[4780]: I1205 07:07:33.602542 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"] Dec 05 07:07:33 crc kubenswrapper[4780]: W1205 07:07:33.709168 4780 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32bd2816_4963_4acd_b1c0_3629dd1c2c3a.slice/crio-e5a596b6015a96f1ad86b419c82272790aa737a6a43eb72ce0211fd404193a1f WatchSource:0}: Error finding container e5a596b6015a96f1ad86b419c82272790aa737a6a43eb72ce0211fd404193a1f: Status 404 returned error can't find the container with id e5a596b6015a96f1ad86b419c82272790aa737a6a43eb72ce0211fd404193a1f Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.276055 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" event={"ID":"e337b12d-84a8-47e6-8d3f-89c15f0b547d","Type":"ContainerDied","Data":"a333a1cc7377b5b3c1683c68b1d7d66299d3cdb5086a1c0573da7b0a8a47fa72"} Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.276573 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a333a1cc7377b5b3c1683c68b1d7d66299d3cdb5086a1c0573da7b0a8a47fa72" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.278726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerStarted","Data":"a8681be4c4b6474dc591d34b256be26ca28c2b2b591faad0d1a0ae0a713d826c"} Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.279720 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerStarted","Data":"e5a596b6015a96f1ad86b419c82272790aa737a6a43eb72ce0211fd404193a1f"} Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.279820 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.279839 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.622583 4780 util.go:48] "No ready sandbox for pod can be found. 
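
Note on the CreateContainerError above: CRI-O failed because the bind-mount source under volume-subpaths for the dns-svc subPath mount no longer existed. Since this pod (e337b12d-84a8-47e6-8d3f-89c15f0b547d) is deleted and its volumes are torn down a few entries later, the likely sequence is that volume cleanup raced the container create. A minimal sketch of the path layout involved, assuming kubelet's usual /var/lib/kubelet/pods/<podUID>/volume-subpaths/<volume>/<container>/<mountIndex> scheme (the helper below is illustrative, not kubelet API):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // subPathSource rebuilds the bind-mount source kubelet prepares for a
    // subPath VolumeMount (illustrative helper, not kubelet API).
    func subPathSource(kubeletRoot, podUID, volume, container string, mountIndex int) string {
        return filepath.Join(kubeletRoot, "pods", podUID, "volume-subpaths",
            volume, container, fmt.Sprint(mountIndex))
    }

    func main() {
        // dns-svc is the second VolumeMount (index 1) of the dnsmasq-dns
        // container in the spec dump above, matching the trailing /1.
        src := subPathSource("/var/lib/kubelet",
            "e337b12d-84a8-47e6-8d3f-89c15f0b547d", "dns-svc", "dnsmasq-dns", 1)
        if _, err := os.Stat(src); err != nil {
            // With the source gone, the runtime's mount fails with the
            // "No such file or directory" seen in the log.
            fmt.Println("subPath source missing:", err)
        }
    }

The replacement pod dnsmasq-dns-5768d59dd9-ftl9r starts cleanly shortly afterwards, which supports the reading that only this already-doomed instance hit the race.
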
Need to start a new one" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696530 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696582 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696604 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696630 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.696660 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j85cn\" (UniqueName: \"kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn\") pod \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\" (UID: \"e337b12d-84a8-47e6-8d3f-89c15f0b547d\") " Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.705555 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn" (OuterVolumeSpecName: "kube-api-access-j85cn") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "kube-api-access-j85cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.798076 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j85cn\" (UniqueName: \"kubernetes.io/projected/e337b12d-84a8-47e6-8d3f-89c15f0b547d-kube-api-access-j85cn\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.802317 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.802691 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.945302 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.992671 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:34 crc kubenswrapper[4780]: I1205 07:07:34.997262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config" (OuterVolumeSpecName: "config") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.000220 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.000259 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.000316 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.008070 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.009229 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e337b12d-84a8-47e6-8d3f-89c15f0b547d" (UID: "e337b12d-84a8-47e6-8d3f-89c15f0b547d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.101595 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.101623 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e337b12d-84a8-47e6-8d3f-89c15f0b547d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.290503 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerStarted","Data":"54efef79e6df78f9c7a79be7c0902ee44a3970e79099cd25bf9047386200ff4c"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.290546 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerStarted","Data":"fa0a6343d445a98183bd0e28c4205f4ee3dbabc1af80c9794439de122f2d4f70"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.293796 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerStarted","Data":"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.293844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerStarted","Data":"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.294033 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.299856 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerStarted","Data":"d6143d4b965cf16bd3eb3b2d8846c785f1dfb5782c3049ff122d6f9bc135d91f"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.299916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerStarted","Data":"209ef2daa7285ce41358b37abc553d7c949e87a567a4f404d81a57a426b0af45"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.300013 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.302702 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" 
event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerStarted","Data":"21cb52d533dbe56f4988844a69a64aaf8e041956d1ff9074d70672e4e95db8ee"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.302769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerStarted","Data":"0eb1f9f781814534359ecc748e52c6e4547659a97d7852a9bc35e6e85c9c72d4"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.304071 4780 generic.go:334] "Generic (PLEG): container finished" podID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerID="14389c2b6f46f94fad0145e597561b5dc143a37d762e36a2015ea5c4fb6e87ea" exitCode=0 Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.304119 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" event={"ID":"23dd56ae-e398-46f7-9a63-b5034ae7e76a","Type":"ContainerDied","Data":"14389c2b6f46f94fad0145e597561b5dc143a37d762e36a2015ea5c4fb6e87ea"} Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.304145 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcf9cb85-72d6t" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.312130 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59d58fb65c-nzf5k" podStartSLOduration=7.207906515 podStartE2EDuration="10.312107754s" podCreationTimestamp="2025-12-05 07:07:25 +0000 UTC" firstStartedPulling="2025-12-05 07:07:31.031991198 +0000 UTC m=+1285.101507530" lastFinishedPulling="2025-12-05 07:07:34.136192437 +0000 UTC m=+1288.205708769" observedRunningTime="2025-12-05 07:07:35.309046502 +0000 UTC m=+1289.378562824" watchObservedRunningTime="2025-12-05 07:07:35.312107754 +0000 UTC m=+1289.381624086" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.353381 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c469598fb-5vvx6" podStartSLOduration=4.353360486 podStartE2EDuration="4.353360486s" podCreationTimestamp="2025-12-05 07:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:35.344322313 +0000 UTC m=+1289.413838645" watchObservedRunningTime="2025-12-05 07:07:35.353360486 +0000 UTC m=+1289.422876818" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.412658 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-669bccb86b-8cjsq" podStartSLOduration=4.412642545 podStartE2EDuration="4.412642545s" podCreationTimestamp="2025-12-05 07:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:35.412286176 +0000 UTC m=+1289.481802508" watchObservedRunningTime="2025-12-05 07:07:35.412642545 +0000 UTC m=+1289.482158877" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.461692 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" podStartSLOduration=7.383387706 podStartE2EDuration="10.461667677s" podCreationTimestamp="2025-12-05 07:07:25 +0000 UTC" firstStartedPulling="2025-12-05 07:07:31.057332891 +0000 UTC m=+1285.126849223" lastFinishedPulling="2025-12-05 07:07:34.135612852 +0000 UTC m=+1288.205129194" observedRunningTime="2025-12-05 07:07:35.434402972 +0000 UTC 
m=+1289.503919344" watchObservedRunningTime="2025-12-05 07:07:35.461667677 +0000 UTC m=+1289.531184009" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.524277 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:07:35 crc kubenswrapper[4780]: E1205 07:07:35.524644 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" containerName="init" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.524655 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" containerName="init" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.524842 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" containerName="init" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.525788 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.529910 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.532347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.547180 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.605078 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"] Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.612817 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613055 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config\") pod 
\"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.613671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq62h\" (UniqueName: \"kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.626210 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcf9cb85-72d6t"] Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715300 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715360 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715395 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715420 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq62h\" (UniqueName: \"kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.715579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.722198 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.724776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.725543 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.728426 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.734257 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.735967 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.737712 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq62h\" (UniqueName: \"kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h\") pod \"neutron-7d68479b85-xqbrx\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:35 crc kubenswrapper[4780]: I1205 07:07:35.845044 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.153627 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e337b12d-84a8-47e6-8d3f-89c15f0b547d" path="/var/lib/kubelet/pods/e337b12d-84a8-47e6-8d3f-89c15f0b547d/volumes" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.346998 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" event={"ID":"23dd56ae-e398-46f7-9a63-b5034ae7e76a","Type":"ContainerStarted","Data":"05573f458b66fc4aa4673c7f8fe2257772c5df4d67adcec98d9ffdcad3c8d2f8"} Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.348585 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.370387 4780 generic.go:334] "Generic (PLEG): container finished" podID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerID="5a330b49dc443f51f2a618fb3f55aadf083dcb8a1047463013ce92f795d30da6" exitCode=0 Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.370659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerDied","Data":"5a330b49dc443f51f2a618fb3f55aadf083dcb8a1047463013ce92f795d30da6"} Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.373099 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.374609 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" podStartSLOduration=5.374597793 podStartE2EDuration="5.374597793s" podCreationTimestamp="2025-12-05 07:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:36.373302978 +0000 UTC m=+1290.442819300" watchObservedRunningTime="2025-12-05 07:07:36.374597793 +0000 UTC m=+1290.444114125" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.441288 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.447616 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539271 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539447 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539511 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539562 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.539593 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ldj6\" (UniqueName: \"kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6\") pod \"995ff9d4-9da8-471e-a696-aefb4ebbf473\" (UID: \"995ff9d4-9da8-471e-a696-aefb4ebbf473\") " Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.540339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.542400 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.543792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts" (OuterVolumeSpecName: "scripts") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.553082 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6" (OuterVolumeSpecName: "kube-api-access-2ldj6") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "kube-api-access-2ldj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.578438 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.611810 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.636208 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data" (OuterVolumeSpecName: "config-data") pod "995ff9d4-9da8-471e-a696-aefb4ebbf473" (UID: "995ff9d4-9da8-471e-a696-aefb4ebbf473"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641514 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641551 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641560 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641569 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/995ff9d4-9da8-471e-a696-aefb4ebbf473-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641578 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641587 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ldj6\" (UniqueName: \"kubernetes.io/projected/995ff9d4-9da8-471e-a696-aefb4ebbf473-kube-api-access-2ldj6\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:36 crc kubenswrapper[4780]: I1205 07:07:36.641598 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ff9d4-9da8-471e-a696-aefb4ebbf473-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.381949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"995ff9d4-9da8-471e-a696-aefb4ebbf473","Type":"ContainerDied","Data":"1b9077dfaeb311f6a15a565961ca83e218259a9c45b418a2bb9443da9fc1a323"} Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.383416 4780 scope.go:117] "RemoveContainer" containerID="cb842b50001bf76fa89c013926f3979f8fdf7ef0abc766b6eed703795fd049a4" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.382114 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.384192 4780 generic.go:334] "Generic (PLEG): container finished" podID="c67223f0-4471-424c-b74d-886cec703c8a" containerID="1149f20bd04bc2a2bf513a262f145e4eb15d251702407c87dc7d603d89e3e28d" exitCode=0 Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.384256 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kv9n" event={"ID":"c67223f0-4471-424c-b74d-886cec703c8a","Type":"ContainerDied","Data":"1149f20bd04bc2a2bf513a262f145e4eb15d251702407c87dc7d603d89e3e28d"} Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.389991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerStarted","Data":"34148739eeef05370a2f9f987ab32cbec201eca9fad402598ae56efaf7b63ca0"} Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.390053 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.390068 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerStarted","Data":"c64aac9dc2da1feacd133e1cbfed47f07ec40d71d95e0fe650627bb11646f1e3"} Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.390080 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerStarted","Data":"634aecf977af38d2cf9edc0f3e6837609def41508f63a163cb376f4cdf2c0cd2"} Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.425029 4780 scope.go:117] "RemoveContainer" containerID="c51d113e6b9bf551f86be82d2e03a05389e6b696f9ad164dfbb8351b5090597f" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.441667 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d68479b85-xqbrx" podStartSLOduration=2.441648564 podStartE2EDuration="2.441648564s" podCreationTimestamp="2025-12-05 07:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:37.440232316 +0000 UTC m=+1291.509748648" watchObservedRunningTime="2025-12-05 07:07:37.441648564 +0000 UTC m=+1291.511164896" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.447544 4780 scope.go:117] "RemoveContainer" containerID="5a330b49dc443f51f2a618fb3f55aadf083dcb8a1047463013ce92f795d30da6" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.515607 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.530868 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.547608 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:07:37 crc kubenswrapper[4780]: E1205 07:07:37.548053 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="ceilometer-notification-agent" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548072 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="ceilometer-notification-agent" Dec 05 07:07:37 crc kubenswrapper[4780]: 
E1205 07:07:37.548082 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="sg-core" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548088 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="sg-core" Dec 05 07:07:37 crc kubenswrapper[4780]: E1205 07:07:37.548112 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="proxy-httpd" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548119 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="proxy-httpd" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548287 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="sg-core" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548301 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="proxy-httpd" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.548308 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" containerName="ceilometer-notification-agent" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.550051 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.551779 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.552194 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.558682 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.661183 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.661316 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.661363 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.661412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nbn\" (UniqueName: \"kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.661738 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.662064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.662153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764263 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764379 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764419 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nbn\" (UniqueName: \"kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764457 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764510 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.764543 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc 
kubenswrapper[4780]: I1205 07:07:37.765864 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.766601 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.770138 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.771430 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.771982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.775428 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.783528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nbn\" (UniqueName: \"kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn\") pod \"ceilometer-0\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") " pod="openstack/ceilometer-0" Dec 05 07:07:37 crc kubenswrapper[4780]: I1205 07:07:37.887521 4780 util.go:30] "No sandbox for pod can be found. 
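
ceilometer-0 is not restarted in place here: the DELETE / REMOVE / ADD sequence above replaces the pod, keeping the name but swapping UID 995ff9d4-9da8-471e-a696-aefb4ebbf473 for 2511678c-ce8a-49cb-8ec6-aa2d0717a3d1, and everything kubelet keeps on disk is keyed by that UID (hence the "Cleaned up orphaned pod volumes dir" for the old UID just below). Illustrative layout:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func podDir(uid string) string { return filepath.Join("/var/lib/kubelet/pods", uid) }

    func main() {
        fmt.Println(podDir("995ff9d4-9da8-471e-a696-aefb4ebbf473")) // old UID: cleaned up
        fmt.Println(podDir("2511678c-ce8a-49cb-8ec6-aa2d0717a3d1")) // new UID: fresh mounts
    }
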
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.153092 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995ff9d4-9da8-471e-a696-aefb4ebbf473" path="/var/lib/kubelet/pods/995ff9d4-9da8-471e-a696-aefb4ebbf473/volumes" Dec 05 07:07:38 crc kubenswrapper[4780]: W1205 07:07:38.377050 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2511678c_ce8a_49cb_8ec6_aa2d0717a3d1.slice/crio-50130c6975b2542f762feabc3ad5837cc515bd9042b0a4f0fe9419040f4f78a3 WatchSource:0}: Error finding container 50130c6975b2542f762feabc3ad5837cc515bd9042b0a4f0fe9419040f4f78a3: Status 404 returned error can't find the container with id 50130c6975b2542f762feabc3ad5837cc515bd9042b0a4f0fe9419040f4f78a3 Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.386811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.398035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerStarted","Data":"50130c6975b2542f762feabc3ad5837cc515bd9042b0a4f0fe9419040f4f78a3"} Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.778931 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894090 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894382 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-742f2\" (UniqueName: \"kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894403 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894423 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894474 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: \"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.894504 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts\") pod \"c67223f0-4471-424c-b74d-886cec703c8a\" (UID: 
\"c67223f0-4471-424c-b74d-886cec703c8a\") " Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.896581 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.912397 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.915213 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2" (OuterVolumeSpecName: "kube-api-access-742f2") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "kube-api-access-742f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.919758 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts" (OuterVolumeSpecName: "scripts") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.969829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.996589 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.996630 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-742f2\" (UniqueName: \"kubernetes.io/projected/c67223f0-4471-424c-b74d-886cec703c8a-kube-api-access-742f2\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.996644 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.996654 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c67223f0-4471-424c-b74d-886cec703c8a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.996663 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:38 crc kubenswrapper[4780]: I1205 07:07:38.997100 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data" (OuterVolumeSpecName: "config-data") pod "c67223f0-4471-424c-b74d-886cec703c8a" (UID: "c67223f0-4471-424c-b74d-886cec703c8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.097803 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67223f0-4471-424c-b74d-886cec703c8a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.412382 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerStarted","Data":"c4e630fd33071029b5f5426cfb67b1d809e1ba571bbe5ec446fcb69439bc8db0"} Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.413919 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kv9n" event={"ID":"c67223f0-4471-424c-b74d-886cec703c8a","Type":"ContainerDied","Data":"0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5"} Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.413959 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e568bb81bf8491fd1ed16da236217b69aacf9199e164da5e07ac532d3aa70c5" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.414015 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7kv9n" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.709182 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:39 crc kubenswrapper[4780]: E1205 07:07:39.709574 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67223f0-4471-424c-b74d-886cec703c8a" containerName="cinder-db-sync" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.709591 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67223f0-4471-424c-b74d-886cec703c8a" containerName="cinder-db-sync" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.709776 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67223f0-4471-424c-b74d-886cec703c8a" containerName="cinder-db-sync" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.710784 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.718798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.719075 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-tcqb6" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.719258 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.719369 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.755096 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816666 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816700 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816739 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjbh\" (UniqueName: 
\"kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.816832 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.844123 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.844480 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="dnsmasq-dns" containerID="cri-o://05573f458b66fc4aa4673c7f8fe2257772c5df4d67adcec98d9ffdcad3c8d2f8" gracePeriod=10 Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.906132 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.907668 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.918146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.918219 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.918240 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.918493 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.919091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.919128 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.919169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stjbh\" (UniqueName: \"kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.923369 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.928615 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.931554 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.943466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.945415 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:39 crc kubenswrapper[4780]: I1205 07:07:39.961282 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stjbh\" (UniqueName: \"kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh\") pod \"cinder-scheduler-0\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " pod="openstack/cinder-scheduler-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020768 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020843 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020894 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb\") pod 
\"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020912 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.020983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5zp\" (UniqueName: \"kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.073402 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.074864 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.081113 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.090381 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.097739 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125933 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.125981 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5zp\" (UniqueName: \"kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.127312 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.128775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.129567 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.129707 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.132050 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.160291 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5zp\" (UniqueName: \"kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp\") pod \"dnsmasq-dns-56d54d44c7-svvm6\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235302 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235357 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235447 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235468 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpx5\" (UniqueName: \"kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.235556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.298915 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.337950 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.337991 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpx5\" (UniqueName: \"kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.338020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.338045 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.338083 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.338114 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.338146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.340292 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " 
pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.340446 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.347962 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.351215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.352560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.364911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.365466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpx5\" (UniqueName: \"kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5\") pod \"cinder-api-0\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.501364 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerStarted","Data":"04a88cbf99864ec61d56540fc895f889dc497aea74b46503ef4f4d4291789132"} Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.515962 4780 generic.go:334] "Generic (PLEG): container finished" podID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerID="05573f458b66fc4aa4673c7f8fe2257772c5df4d67adcec98d9ffdcad3c8d2f8" exitCode=0 Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.516010 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" event={"ID":"23dd56ae-e398-46f7-9a63-b5034ae7e76a","Type":"ContainerDied","Data":"05573f458b66fc4aa4673c7f8fe2257772c5df4d67adcec98d9ffdcad3c8d2f8"} Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.598268 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.635550 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649360 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649410 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2df7k\" (UniqueName: \"kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649554 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.649642 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb\") pod \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\" (UID: \"23dd56ae-e398-46f7-9a63-b5034ae7e76a\") " Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.708060 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k" (OuterVolumeSpecName: "kube-api-access-2df7k") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "kube-api-access-2df7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.756980 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2df7k\" (UniqueName: \"kubernetes.io/projected/23dd56ae-e398-46f7-9a63-b5034ae7e76a-kube-api-access-2df7k\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.760554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.777172 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.785934 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.807742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.853168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config" (OuterVolumeSpecName: "config") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.861328 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.861359 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.861397 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.861405 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.865335 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23dd56ae-e398-46f7-9a63-b5034ae7e76a" (UID: "23dd56ae-e398-46f7-9a63-b5034ae7e76a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:40 crc kubenswrapper[4780]: I1205 07:07:40.992578 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23dd56ae-e398-46f7-9a63-b5034ae7e76a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.093169 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.376432 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:41 crc kubenswrapper[4780]: W1205 07:07:41.379862 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57b8942c_03bb_4c71_8f15_6b965c01b768.slice/crio-1075a2fb28ff56f18a91b0da8104b6d0db6dc1f3ccd8488a6be9493591c82d67 WatchSource:0}: Error finding container 1075a2fb28ff56f18a91b0da8104b6d0db6dc1f3ccd8488a6be9493591c82d67: Status 404 returned error can't find the container with id 1075a2fb28ff56f18a91b0da8104b6d0db6dc1f3ccd8488a6be9493591c82d67 Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.533002 4780 generic.go:334] "Generic (PLEG): container finished" podID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerID="fef6e722fddda372925f1a55f421f2982a68362d125bb6bf864ed9052243417f" exitCode=0 Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.533089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" event={"ID":"3d187a7e-2376-4b39-84b2-73ecfa0b15bf","Type":"ContainerDied","Data":"fef6e722fddda372925f1a55f421f2982a68362d125bb6bf864ed9052243417f"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.533155 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" event={"ID":"3d187a7e-2376-4b39-84b2-73ecfa0b15bf","Type":"ContainerStarted","Data":"d19289d49d355c5d362ab535e485b508e79bc9f75d724cf5ba09f5875e705d19"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.536820 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerStarted","Data":"1075a2fb28ff56f18a91b0da8104b6d0db6dc1f3ccd8488a6be9493591c82d67"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.547556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerStarted","Data":"826efb8fd7f678263624359df77ca36e1fc394deed96889a21c75cb0214e446e"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.557960 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerStarted","Data":"ed99312829ec7078fd3c8cef45d2af95309f40d6eb058046f23894ad6113715f"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.565051 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" event={"ID":"23dd56ae-e398-46f7-9a63-b5034ae7e76a","Type":"ContainerDied","Data":"4af34759bbdc9fd4c8c47747a18f7ea455240573f9e1ecd2eac399ee2be76f2c"} Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.565101 4780 scope.go:117] "RemoveContainer" containerID="05573f458b66fc4aa4673c7f8fe2257772c5df4d67adcec98d9ffdcad3c8d2f8" Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.565295 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5768d59dd9-ftl9r" Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.693053 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.701142 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5768d59dd9-ftl9r"] Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.752283 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-799c48f5f4-sm7kz" Dec 05 07:07:41 crc kubenswrapper[4780]: I1205 07:07:41.832202 4780 scope.go:117] "RemoveContainer" containerID="14389c2b6f46f94fad0145e597561b5dc143a37d762e36a2015ea5c4fb6e87ea" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.153102 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" path="/var/lib/kubelet/pods/23dd56ae-e398-46f7-9a63-b5034ae7e76a/volumes" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.393793 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-799c48f5f4-sm7kz" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.489222 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"] Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.490287 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" containerID="cri-o://8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252" gracePeriod=30 Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.490780 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" containerID="cri-o://4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3" gracePeriod=30 Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.529048 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.537087 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.537313 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.537687 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": EOF" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.621288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" event={"ID":"3d187a7e-2376-4b39-84b2-73ecfa0b15bf","Type":"ContainerStarted","Data":"a2dee3018e38265f1fab81663bc435e93805201a6800848b3cb8d8282d2f7c3a"} Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.623264 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.656987 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerStarted","Data":"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3"} Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.701771 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" podStartSLOduration=3.701750005 podStartE2EDuration="3.701750005s" podCreationTimestamp="2025-12-05 07:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:42.686430302 +0000 UTC m=+1296.755946634" watchObservedRunningTime="2025-12-05 07:07:42.701750005 +0000 UTC m=+1296.771266337" Dec 05 07:07:42 crc kubenswrapper[4780]: I1205 07:07:42.827136 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.740021 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerStarted","Data":"13da505ed3a2caa0fefbf684b14d22b8421b52b40329cf917b008fd0be6b7229"} Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.740515 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.763093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerStarted","Data":"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9"} Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.777237 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.842680798 podStartE2EDuration="6.777218787s" podCreationTimestamp="2025-12-05 07:07:37 +0000 UTC" firstStartedPulling="2025-12-05 07:07:38.379361648 +0000 UTC m=+1292.448877980" lastFinishedPulling="2025-12-05 07:07:42.313899637 +0000 UTC m=+1296.383415969" observedRunningTime="2025-12-05 07:07:43.773377663 +0000 UTC m=+1297.842894005" watchObservedRunningTime="2025-12-05 07:07:43.777218787 +0000 UTC m=+1297.846735119" Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.789355 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b4ea088-089a-4945-b914-c58bfec9c403" containerID="4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3" exitCode=143 Dec 05 07:07:43 crc kubenswrapper[4780]: I1205 07:07:43.791007 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerDied","Data":"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3"} Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.804082 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerStarted","Data":"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece"} Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.804960 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api-log" containerID="cri-o://45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" gracePeriod=30 Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.805221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.805537 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api" containerID="cri-o://4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" gracePeriod=30 Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.812145 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerStarted","Data":"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62"} Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.832019 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.8320021539999995 podStartE2EDuration="4.832002154s" podCreationTimestamp="2025-12-05 07:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:44.826343222 +0000 UTC m=+1298.895859564" watchObservedRunningTime="2025-12-05 07:07:44.832002154 +0000 UTC m=+1298.901518486" Dec 05 07:07:44 crc kubenswrapper[4780]: I1205 07:07:44.856700 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.8435646420000005 podStartE2EDuration="5.85668299s" podCreationTimestamp="2025-12-05 07:07:39 +0000 UTC" firstStartedPulling="2025-12-05 07:07:40.888808601 +0000 UTC m=+1294.958324943" lastFinishedPulling="2025-12-05 07:07:41.901926959 +0000 UTC m=+1295.971443291" observedRunningTime="2025-12-05 07:07:44.850147664 +0000 UTC m=+1298.919663996" watchObservedRunningTime="2025-12-05 07:07:44.85668299 +0000 UTC m=+1298.926199322" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.078234 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.434562 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.516957 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517066 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517105 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517125 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517115 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517190 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fpx5\" (UniqueName: \"kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517316 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs\") pod \"57b8942c-03bb-4c71-8f15-6b965c01b768\" (UID: \"57b8942c-03bb-4c71-8f15-6b965c01b768\") " Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.517756 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57b8942c-03bb-4c71-8f15-6b965c01b768-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.518101 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs" (OuterVolumeSpecName: "logs") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.525553 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5" (OuterVolumeSpecName: "kube-api-access-2fpx5") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "kube-api-access-2fpx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.525620 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts" (OuterVolumeSpecName: "scripts") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.542094 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.554054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.583066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data" (OuterVolumeSpecName: "config-data") pod "57b8942c-03bb-4c71-8f15-6b965c01b768" (UID: "57b8942c-03bb-4c71-8f15-6b965c01b768"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.619779 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.620063 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b8942c-03bb-4c71-8f15-6b965c01b768-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.620164 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.620291 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.620364 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b8942c-03bb-4c71-8f15-6b965c01b768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.620442 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fpx5\" (UniqueName: \"kubernetes.io/projected/57b8942c-03bb-4c71-8f15-6b965c01b768-kube-api-access-2fpx5\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821624 4780 generic.go:334] "Generic (PLEG): container finished" podID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerID="4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" exitCode=0 Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821666 4780 generic.go:334] "Generic (PLEG): container finished" podID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerID="45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" exitCode=143 Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerDied","Data":"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece"} Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerDied","Data":"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3"} Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"57b8942c-03bb-4c71-8f15-6b965c01b768","Type":"ContainerDied","Data":"1075a2fb28ff56f18a91b0da8104b6d0db6dc1f3ccd8488a6be9493591c82d67"} Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821797 4780 scope.go:117] "RemoveContainer" containerID="4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.821725 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.863280 4780 scope.go:117] "RemoveContainer" containerID="45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.876111 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.893515 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.903469 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.904075 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="dnsmasq-dns" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904095 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="dnsmasq-dns" Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.904127 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api-log" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904135 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api-log" Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.904162 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904169 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api" Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.904181 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="init" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904188 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="init" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904395 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904445 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="23dd56ae-e398-46f7-9a63-b5034ae7e76a" containerName="dnsmasq-dns" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.904462 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" containerName="cinder-api-log" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.905822 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.908194 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.908945 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.909867 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.910219 4780 scope.go:117] "RemoveContainer" containerID="4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.911383 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece\": container with ID starting with 4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece not found: ID does not exist" containerID="4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.911420 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece"} err="failed to get container status \"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece\": rpc error: code = NotFound desc = could not find container \"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece\": container with ID starting with 4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece not found: ID does not exist" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.911446 4780 scope.go:117] "RemoveContainer" containerID="45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" Dec 05 07:07:45 crc kubenswrapper[4780]: E1205 07:07:45.911865 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3\": container with ID starting with 45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3 not found: ID does not exist" containerID="45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.911977 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3"} err="failed to get container status \"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3\": rpc error: code = NotFound desc = could not find container \"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3\": container with ID starting with 45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3 not found: ID does not exist" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.912023 4780 scope.go:117] "RemoveContainer" containerID="4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.912951 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece"} err="failed to get container status \"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece\": rpc 
error: code = NotFound desc = could not find container \"4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece\": container with ID starting with 4100b85c544691536cb30f977fe1650fcf56e5f02347bd72dbc36cf446a3cece not found: ID does not exist" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.912985 4780 scope.go:117] "RemoveContainer" containerID="45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.913356 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3"} err="failed to get container status \"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3\": rpc error: code = NotFound desc = could not find container \"45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3\": container with ID starting with 45c02adbdc7d8102fd5265b6611d8453b58ef04c3c6a5e7786d42fbaa037c4a3 not found: ID does not exist" Dec 05 07:07:45 crc kubenswrapper[4780]: I1205 07:07:45.921773 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.026744 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.026830 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027123 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027167 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027234 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whpfj\" (UniqueName: \"kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027305 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027383 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027436 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.027508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whpfj\" (UniqueName: \"kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129763 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129826 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.129848 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.130280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.130331 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.144165 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.144679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.145047 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.145228 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.145549 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.149223 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.150054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-whpfj\" (UniqueName: \"kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj\") pod \"cinder-api-0\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.157641 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b8942c-03bb-4c71-8f15-6b965c01b768" path="/var/lib/kubelet/pods/57b8942c-03bb-4c71-8f15-6b965c01b768/volumes" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.312423 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.790454 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:07:46 crc kubenswrapper[4780]: W1205 07:07:46.794448 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf87b821_f0c0_41df_a1ee_f2c44a09cc82.slice/crio-d97fe25a6142b5642b1114e6fd451ca14ff528856917d016baa9dc2ce96c7adc WatchSource:0}: Error finding container d97fe25a6142b5642b1114e6fd451ca14ff528856917d016baa9dc2ce96c7adc: Status 404 returned error can't find the container with id d97fe25a6142b5642b1114e6fd451ca14ff528856917d016baa9dc2ce96c7adc Dec 05 07:07:46 crc kubenswrapper[4780]: I1205 07:07:46.838299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerStarted","Data":"d97fe25a6142b5642b1114e6fd451ca14ff528856917d016baa9dc2ce96c7adc"} Dec 05 07:07:47 crc kubenswrapper[4780]: I1205 07:07:47.622129 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:07:47 crc kubenswrapper[4780]: I1205 07:07:47.622175 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:07:47 crc kubenswrapper[4780]: I1205 07:07:47.850194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerStarted","Data":"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156"} Dec 05 07:07:47 crc kubenswrapper[4780]: I1205 07:07:47.970332 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:49186->10.217.0.154:9311: read: connection reset by peer" Dec 05 07:07:47 crc kubenswrapper[4780]: I1205 07:07:47.970658 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d77f6ccb8-bx2bz" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:49202->10.217.0.154:9311: read: connection reset by peer" Dec 05 07:07:48 crc 
kubenswrapper[4780]: I1205 07:07:48.523898 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d77f6ccb8-bx2bz" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.581361 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq2ls\" (UniqueName: \"kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls\") pod \"9b4ea088-089a-4945-b914-c58bfec9c403\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.581710 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom\") pod \"9b4ea088-089a-4945-b914-c58bfec9c403\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.581927 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data\") pod \"9b4ea088-089a-4945-b914-c58bfec9c403\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.582167 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle\") pod \"9b4ea088-089a-4945-b914-c58bfec9c403\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.582280 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs\") pod \"9b4ea088-089a-4945-b914-c58bfec9c403\" (UID: \"9b4ea088-089a-4945-b914-c58bfec9c403\") " Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.583093 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs" (OuterVolumeSpecName: "logs") pod "9b4ea088-089a-4945-b914-c58bfec9c403" (UID: "9b4ea088-089a-4945-b914-c58bfec9c403"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.583934 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4ea088-089a-4945-b914-c58bfec9c403-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.587644 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls" (OuterVolumeSpecName: "kube-api-access-pq2ls") pod "9b4ea088-089a-4945-b914-c58bfec9c403" (UID: "9b4ea088-089a-4945-b914-c58bfec9c403"). InnerVolumeSpecName "kube-api-access-pq2ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.587713 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b4ea088-089a-4945-b914-c58bfec9c403" (UID: "9b4ea088-089a-4945-b914-c58bfec9c403"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.619797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b4ea088-089a-4945-b914-c58bfec9c403" (UID: "9b4ea088-089a-4945-b914-c58bfec9c403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.636611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data" (OuterVolumeSpecName: "config-data") pod "9b4ea088-089a-4945-b914-c58bfec9c403" (UID: "9b4ea088-089a-4945-b914-c58bfec9c403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.686172 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq2ls\" (UniqueName: \"kubernetes.io/projected/9b4ea088-089a-4945-b914-c58bfec9c403-kube-api-access-pq2ls\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.686205 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.686217 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.686226 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ea088-089a-4945-b914-c58bfec9c403-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.863659 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b4ea088-089a-4945-b914-c58bfec9c403" containerID="8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252" exitCode=0 Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.863733 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerDied","Data":"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252"} Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.863766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d77f6ccb8-bx2bz" event={"ID":"9b4ea088-089a-4945-b914-c58bfec9c403","Type":"ContainerDied","Data":"19bd9e1949876468ebd118cfcb26979b198a5fe8f3da1eede77925521a66b7ba"} Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.863786 4780 scope.go:117] "RemoveContainer" containerID="8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.863815 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d77f6ccb8-bx2bz" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.868453 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerStarted","Data":"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb"} Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.869672 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.895010 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.894983576 podStartE2EDuration="3.894983576s" podCreationTimestamp="2025-12-05 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:48.889441246 +0000 UTC m=+1302.958957588" watchObservedRunningTime="2025-12-05 07:07:48.894983576 +0000 UTC m=+1302.964499908" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.910432 4780 scope.go:117] "RemoveContainer" containerID="4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.940344 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"] Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.950366 4780 scope.go:117] "RemoveContainer" containerID="8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252" Dec 05 07:07:48 crc kubenswrapper[4780]: E1205 07:07:48.951080 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252\": container with ID starting with 8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252 not found: ID does not exist" containerID="8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.951135 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252"} err="failed to get container status \"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252\": rpc error: code = NotFound desc = could not find container \"8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252\": container with ID starting with 8240ba99e2ba764fbc1310326eada083906a99b5c717e06d00053a46d0c21252 not found: ID does not exist" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.951166 4780 scope.go:117] "RemoveContainer" containerID="4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3" Dec 05 07:07:48 crc kubenswrapper[4780]: E1205 07:07:48.951477 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3\": container with ID starting with 4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3 not found: ID does not exist" containerID="4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.951596 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3"} err="failed to 
get container status \"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3\": rpc error: code = NotFound desc = could not find container \"4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3\": container with ID starting with 4ca94ee0b89be93ede65a9ab333f30bd883c9155a09cf187a772ada533a52ff3 not found: ID does not exist" Dec 05 07:07:48 crc kubenswrapper[4780]: I1205 07:07:48.954082 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d77f6ccb8-bx2bz"] Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.151847 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" path="/var/lib/kubelet/pods/9b4ea088-089a-4945-b914-c58bfec9c403/volumes" Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.301040 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.312517 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.375188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.426861 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.427298 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="dnsmasq-dns" containerID="cri-o://db7f8ad925a6d723dd40ba017c82e5d901b4fc6f29ab6babf1ed5e72d4d8a760" gracePeriod=10 Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893126 4780 generic.go:334] "Generic (PLEG): container finished" podID="48088846-ad19-4031-b988-4825d14f503f" containerID="db7f8ad925a6d723dd40ba017c82e5d901b4fc6f29ab6babf1ed5e72d4d8a760" exitCode=0 Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893193 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" event={"ID":"48088846-ad19-4031-b988-4825d14f503f","Type":"ContainerDied","Data":"db7f8ad925a6d723dd40ba017c82e5d901b4fc6f29ab6babf1ed5e72d4d8a760"} Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893420 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" event={"ID":"48088846-ad19-4031-b988-4825d14f503f","Type":"ContainerDied","Data":"7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301"} Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893439 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d83678b73b6c6f0e033a6ded2fd4c7d1e3e12d87aeaaf7d743cdd5ff572b301" Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893572 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="cinder-scheduler" containerID="cri-o://8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9" gracePeriod=30 Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.893663 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="probe" 
containerID="cri-o://ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62" gracePeriod=30 Dec 05 07:07:50 crc kubenswrapper[4780]: I1205 07:07:50.925704 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.038841 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.039349 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl6kn\" (UniqueName: \"kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.039455 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.039666 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.039753 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.039817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb\") pod \"48088846-ad19-4031-b988-4825d14f503f\" (UID: \"48088846-ad19-4031-b988-4825d14f503f\") " Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.047664 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn" (OuterVolumeSpecName: "kube-api-access-zl6kn") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "kube-api-access-zl6kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.095235 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.097025 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config" (OuterVolumeSpecName: "config") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.099532 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.100950 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.105415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48088846-ad19-4031-b988-4825d14f503f" (UID: "48088846-ad19-4031-b988-4825d14f503f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143048 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl6kn\" (UniqueName: \"kubernetes.io/projected/48088846-ad19-4031-b988-4825d14f503f-kube-api-access-zl6kn\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143088 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143103 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143116 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143130 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.143144 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48088846-ad19-4031-b988-4825d14f503f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.902855 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79464d554c-c6b8s" Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.966594 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:07:51 crc kubenswrapper[4780]: I1205 07:07:51.975630 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79464d554c-c6b8s"] Dec 05 07:07:52 crc kubenswrapper[4780]: I1205 07:07:52.148187 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48088846-ad19-4031-b988-4825d14f503f" path="/var/lib/kubelet/pods/48088846-ad19-4031-b988-4825d14f503f/volumes" Dec 05 07:07:52 crc kubenswrapper[4780]: I1205 07:07:52.917617 4780 generic.go:334] "Generic (PLEG): container finished" podID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerID="ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62" exitCode=0 Dec 05 07:07:52 crc kubenswrapper[4780]: I1205 07:07:52.917691 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerDied","Data":"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62"} Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.903152 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.944274 4780 generic.go:334] "Generic (PLEG): container finished" podID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerID="8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9" exitCode=0 Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.944328 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerDied","Data":"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9"} Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.944360 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"197d9af7-8b17-416e-bdc8-4c19a0bb331e","Type":"ContainerDied","Data":"ed99312829ec7078fd3c8cef45d2af95309f40d6eb058046f23894ad6113715f"} Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.944383 4780 scope.go:117] "RemoveContainer" containerID="ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62" Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.944520 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.975999 4780 scope.go:117] "RemoveContainer" containerID="8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9" Dec 05 07:07:54 crc kubenswrapper[4780]: I1205 07:07:54.977436 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c9f9456b6-zflhk" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.014597 4780 scope.go:117] "RemoveContainer" containerID="ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.028193 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62\": container with ID starting with ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62 not found: ID does not exist" containerID="ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.028267 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62"} err="failed to get container status \"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62\": rpc error: code = NotFound desc = could not find container \"ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62\": container with ID starting with ec19e29060271cc2842b3d46f53a9c6ec3c12ed13b55211f0640c91166d88c62 not found: ID does not exist" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.028307 4780 scope.go:117] "RemoveContainer" containerID="8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.028960 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9\": container with ID starting with 8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9 not found: ID does not exist" containerID="8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.029017 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9"} err="failed to get container status \"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9\": rpc error: code = NotFound desc = could not find container \"8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9\": container with ID starting with 8d5184574a6f52c74852c35a52b451587fc8acf46c33084acaae77b94b3b2df9 not found: ID does not exist" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.040941 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041109 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: 
"197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041283 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.041603 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stjbh\" (UniqueName: \"kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh\") pod \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\" (UID: \"197d9af7-8b17-416e-bdc8-4c19a0bb331e\") " Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.042294 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/197d9af7-8b17-416e-bdc8-4c19a0bb331e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.048969 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts" (OuterVolumeSpecName: "scripts") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: "197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.052028 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: "197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.061553 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh" (OuterVolumeSpecName: "kube-api-access-stjbh") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: "197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "kube-api-access-stjbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.100560 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: "197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.143948 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.143983 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.143993 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.144004 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stjbh\" (UniqueName: \"kubernetes.io/projected/197d9af7-8b17-416e-bdc8-4c19a0bb331e-kube-api-access-stjbh\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.170781 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data" (OuterVolumeSpecName: "config-data") pod "197d9af7-8b17-416e-bdc8-4c19a0bb331e" (UID: "197d9af7-8b17-416e-bdc8-4c19a0bb331e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.245794 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d9af7-8b17-416e-bdc8-4c19a0bb331e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.279088 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.287739 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.305613 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306054 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="cinder-scheduler" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306074 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="cinder-scheduler" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306098 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="dnsmasq-dns" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306108 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="dnsmasq-dns" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306128 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306135 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306157 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306163 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306180 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="probe" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306187 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="probe" Dec 05 07:07:55 crc kubenswrapper[4780]: E1205 07:07:55.306201 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="init" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306210 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="init" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306413 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306431 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4ea088-089a-4945-b914-c58bfec9c403" containerName="barbican-api-log" Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 
07:07:55.306442 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="cinder-scheduler"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306450 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="48088846-ad19-4031-b988-4825d14f503f" containerName="dnsmasq-dns"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.306463 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" containerName="probe"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.307544 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.309758 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.320527 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449275 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449333 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449395 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7f6\" (UniqueName: \"kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.449565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551374 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551421 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551469 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551495 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7f6\" (UniqueName: \"kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.551595 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.552384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.556489 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.556727 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.556975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.558198 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.570521 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7f6\" (UniqueName: \"kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6\") pod \"cinder-scheduler-0\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.609993 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.618748 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.619888 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.621583 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-h5pj8"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.624015 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.625750 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.628333 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.755459 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.756028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.756136 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvh8\" (UniqueName: \"kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.756165 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.857993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvh8\" (UniqueName: \"kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.858035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.858079 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.858168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.859232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.862654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.874638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:55 crc kubenswrapper[4780]: I1205 07:07:55.878497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvh8\" (UniqueName: \"kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8\") pod \"openstackclient\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " pod="openstack/openstackclient"
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.063356 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 05 07:07:56 crc kubenswrapper[4780]: W1205 07:07:56.095957 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9395104_b579_44d5_bbf0_69fe4d17406d.slice/crio-00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f WatchSource:0}: Error finding container 00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f: Status 404 returned error can't find the container with id 00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.102562 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.178998 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197d9af7-8b17-416e-bdc8-4c19a0bb331e" path="/var/lib/kubelet/pods/197d9af7-8b17-416e-bdc8-4c19a0bb331e/volumes"
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.545361 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 05 07:07:56 crc kubenswrapper[4780]: W1205 07:07:56.559692 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65736cb4_25b2_402e_8dfe_d00b218a274b.slice/crio-4c4cb229796f92790d51d363522a4fcf460195da71e33a4d358f6e7dda585aab WatchSource:0}: Error finding container 4c4cb229796f92790d51d363522a4fcf460195da71e33a4d358f6e7dda585aab: Status 404 returned error can't find the container with id 4c4cb229796f92790d51d363522a4fcf460195da71e33a4d358f6e7dda585aab
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.970920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"65736cb4-25b2-402e-8dfe-d00b218a274b","Type":"ContainerStarted","Data":"4c4cb229796f92790d51d363522a4fcf460195da71e33a4d358f6e7dda585aab"}
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.973682 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerStarted","Data":"bbf7ba30828f7305d2c91dd07104e5ee99cdcba79c89362c856ebc2c639710e1"}
Dec 05 07:07:56 crc kubenswrapper[4780]: I1205 07:07:56.973756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerStarted","Data":"00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f"}
Dec 05 07:07:57 crc kubenswrapper[4780]: I1205 07:07:57.987529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerStarted","Data":"b7d4a3dac21d90122fe88d5308d7939f24f6b2475dc30bbbf81f06bd4930e1a3"}
Dec 05 07:07:58 crc kubenswrapper[4780]: I1205 07:07:58.014845 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.014824884 podStartE2EDuration="3.014824884s" podCreationTimestamp="2025-12-05 07:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:07:58.008850502 +0000 UTC m=+1312.078366834" watchObservedRunningTime="2025-12-05 07:07:58.014824884 +0000 UTC m=+1312.084341216"
Dec 05 07:07:58 crc kubenswrapper[4780]: I1205 07:07:58.601643 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.924845 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"]
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.926702 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.932839 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"]
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.966305 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.966537 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 05 07:07:59 crc kubenswrapper[4780]: I1205 07:07:59.969126 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.066963 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067430 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwqj2\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067489 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.067556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169652 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169729 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169769 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169900 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.169957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.170033 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwqj2\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.170584 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.170617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.178556 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.178871 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.179725 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.180147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.185644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.198971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwqj2\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2\") pod \"swift-proxy-58fb69b8bc-qmkp5\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.296299 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.506479 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.507199 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-central-agent" containerID="cri-o://c4e630fd33071029b5f5426cfb67b1d809e1ba571bbe5ec446fcb69439bc8db0" gracePeriod=30
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.507923 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" containerID="cri-o://13da505ed3a2caa0fefbf684b14d22b8421b52b40329cf917b008fd0be6b7229" gracePeriod=30
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.507979 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="sg-core" containerID="cri-o://826efb8fd7f678263624359df77ca36e1fc394deed96889a21c75cb0214e446e" gracePeriod=30
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.508012 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-notification-agent" containerID="cri-o://04a88cbf99864ec61d56540fc895f889dc497aea74b46503ef4f4d4291789132" gracePeriod=30
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.523239 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": EOF"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.626831 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 05 07:08:00 crc kubenswrapper[4780]: I1205 07:08:00.915389 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"]
Dec 05 07:08:00 crc kubenswrapper[4780]: W1205 07:08:00.924449 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fd70346_51cf_44fc_8cea_48ee35deadb0.slice/crio-35cf566470a091b9caef81f95df53c04f9fc375d2635c73c36f6ba1a60133602 WatchSource:0}: Error finding container 35cf566470a091b9caef81f95df53c04f9fc375d2635c73c36f6ba1a60133602: Status 404 returned error can't find the container with id 35cf566470a091b9caef81f95df53c04f9fc375d2635c73c36f6ba1a60133602
Dec 05 07:08:01 crc kubenswrapper[4780]: I1205 07:08:01.018243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerStarted","Data":"35cf566470a091b9caef81f95df53c04f9fc375d2635c73c36f6ba1a60133602"}
Dec 05 07:08:01 crc kubenswrapper[4780]: I1205 07:08:01.023028 4780 generic.go:334] "Generic (PLEG): container finished" podID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerID="13da505ed3a2caa0fefbf684b14d22b8421b52b40329cf917b008fd0be6b7229" exitCode=0
Dec 05 07:08:01 crc kubenswrapper[4780]: I1205 07:08:01.023054 4780 generic.go:334] "Generic (PLEG): container finished" podID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerID="826efb8fd7f678263624359df77ca36e1fc394deed96889a21c75cb0214e446e" exitCode=2
Dec 05 07:08:01 crc kubenswrapper[4780]: I1205 07:08:01.023069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerDied","Data":"13da505ed3a2caa0fefbf684b14d22b8421b52b40329cf917b008fd0be6b7229"}
Dec 05 07:08:01 crc kubenswrapper[4780]: I1205 07:08:01.023092 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerDied","Data":"826efb8fd7f678263624359df77ca36e1fc394deed96889a21c75cb0214e446e"}
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.036440 4780 generic.go:334] "Generic (PLEG): container finished" podID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerID="c4e630fd33071029b5f5426cfb67b1d809e1ba571bbe5ec446fcb69439bc8db0" exitCode=0
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.036608 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerDied","Data":"c4e630fd33071029b5f5426cfb67b1d809e1ba571bbe5ec446fcb69439bc8db0"}
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.041545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerStarted","Data":"71729bfa39e43aff8b7d4b4b743bda8c1770fd26a85d5b0e6a9fae0194a3feb3"}
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.041584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerStarted","Data":"65ffda40cf80de35ae936a6d650ef297fd76dee5044c69bbd6c7b2b5e327da96"}
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.042105 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.042194 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.063684 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" podStartSLOduration=3.063669444 podStartE2EDuration="3.063669444s" podCreationTimestamp="2025-12-05 07:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:02.062598066 +0000 UTC m=+1316.132114398" watchObservedRunningTime="2025-12-05 07:08:02.063669444 +0000 UTC m=+1316.133185776"
Dec 05 07:08:02 crc kubenswrapper[4780]: I1205 07:08:02.516772 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c469598fb-5vvx6"
Dec 05 07:08:03 crc kubenswrapper[4780]: I1205 07:08:03.702151 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-669bccb86b-8cjsq"
Dec 05 07:08:03 crc kubenswrapper[4780]: I1205 07:08:03.752773 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-669bccb86b-8cjsq"
Dec 05 07:08:04 crc kubenswrapper[4780]: I1205 07:08:04.065603 4780 generic.go:334] "Generic (PLEG): container finished" podID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerID="04a88cbf99864ec61d56540fc895f889dc497aea74b46503ef4f4d4291789132" exitCode=0
Dec 05 07:08:04 crc kubenswrapper[4780]: I1205 07:08:04.065694 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerDied","Data":"04a88cbf99864ec61d56540fc895f889dc497aea74b46503ef4f4d4291789132"}
Dec 05 07:08:05 crc kubenswrapper[4780]: I1205 07:08:05.860601 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d68479b85-xqbrx"
Dec 05 07:08:05 crc kubenswrapper[4780]: I1205 07:08:05.930258 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"]
Dec 05 07:08:05 crc kubenswrapper[4780]: I1205 07:08:05.930481 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c469598fb-5vvx6" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-api" containerID="cri-o://209ef2daa7285ce41358b37abc553d7c949e87a567a4f404d81a57a426b0af45" gracePeriod=30
Dec 05 07:08:05 crc kubenswrapper[4780]: I1205 07:08:05.930961 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c469598fb-5vvx6" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-httpd" containerID="cri-o://d6143d4b965cf16bd3eb3b2d8846c785f1dfb5782c3049ff122d6f9bc135d91f" gracePeriod=30
Dec 05 07:08:05 crc kubenswrapper[4780]: I1205 07:08:05.986923 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.111345 4780 generic.go:334] "Generic (PLEG): container finished" podID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerID="d6143d4b965cf16bd3eb3b2d8846c785f1dfb5782c3049ff122d6f9bc135d91f" exitCode=0
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.111721 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerDied","Data":"d6143d4b965cf16bd3eb3b2d8846c785f1dfb5782c3049ff122d6f9bc135d91f"}
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.196940 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9xcxs"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.198252 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.206930 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xcxs"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.300230 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bwb2m"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.301645 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.316179 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ebdd-account-create-update-5f9cl"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.335263 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.338768 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bwb2m"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.340249 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.348174 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.348291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjn5x\" (UniqueName: \"kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.376295 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ebdd-account-create-update-5f9cl"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.413762 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m5dbk"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.415543 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.433122 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m5dbk"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449379 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqvp\" (UniqueName: \"kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449439 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7j6\" (UniqueName: \"kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449806 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449853 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.449960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjn5x\" (UniqueName: \"kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.450747 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.476536 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjn5x\" (UniqueName: \"kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x\") pod \"nova-api-db-create-9xcxs\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.515310 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xcxs"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.519750 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-79a0-account-create-update-tm7nl"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.521431 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.524117 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.532460 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-79a0-account-create-update-tm7nl"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551491 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqvp\" (UniqueName: \"kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551579 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7j6\" (UniqueName: \"kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2ts\" (UniqueName: \"kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551672 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.551691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.552643 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.553298 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.570106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqvp\" (UniqueName: \"kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp\") pod \"nova-api-ebdd-account-create-update-5f9cl\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.589493 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7j6\" (UniqueName: \"kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6\") pod \"nova-cell0-db-create-bwb2m\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.629133 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bwb2m"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.653016 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvjv\" (UniqueName: \"kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.653149 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2ts\" (UniqueName: \"kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.653186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.653210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.657105 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.671485 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ebdd-account-create-update-5f9cl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.677542 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2ts\" (UniqueName: \"kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts\") pod \"nova-cell1-db-create-m5dbk\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.704146 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2782-account-create-update-z246t"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.705785 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.708063 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.723158 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2782-account-create-update-z246t"]
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.749013 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m5dbk"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.755238 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvjv\" (UniqueName: \"kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.755335 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.756109 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.776405 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvjv\" (UniqueName: \"kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv\") pod \"nova-cell0-79a0-account-create-update-tm7nl\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.841046 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.861915 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2l8j\" (UniqueName: \"kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.862066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.894980 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": dial tcp 10.217.0.160:3000: connect: connection refused"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.964222 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.964324 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2l8j\" (UniqueName: \"kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.978349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:07 crc kubenswrapper[4780]: I1205 07:08:07.989594 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2l8j\" (UniqueName: \"kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j\") pod \"nova-cell1-2782-account-create-update-z246t\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:08 crc kubenswrapper[4780]: I1205 07:08:08.034317 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2782-account-create-update-z246t"
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.190152 4780 generic.go:334] "Generic (PLEG): container finished" podID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerID="209ef2daa7285ce41358b37abc553d7c949e87a567a4f404d81a57a426b0af45" exitCode=0
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.190986 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerDied","Data":"209ef2daa7285ce41358b37abc553d7c949e87a567a4f404d81a57a426b0af45"}
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.260490 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.315251 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.373330 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58fb69b8bc-qmkp5"
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413276 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95nbn\" (UniqueName: \"kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413421 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.413642 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd\") pod \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\" (UID: \"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.418332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.428959 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn" (OuterVolumeSpecName: "kube-api-access-95nbn") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "kube-api-access-95nbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.431770 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.432252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts" (OuterVolumeSpecName: "scripts") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.451789 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.509313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.560644 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95nbn\" (UniqueName: \"kubernetes.io/projected/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-kube-api-access-95nbn\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.561009 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.563719 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.563776 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.592726 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bwb2m"]
Dec 05 07:08:10 crc kubenswrapper[4780]: W1205 07:08:10.605919 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fb26cab_c196_4a45_8ba3_2d9066683eaa.slice/crio-9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127 WatchSource:0}: Error finding container 9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127: Status 404 returned error can't find the container with id 9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.649977 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.652077 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data" (OuterVolumeSpecName: "config-data") pod "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" (UID: "2511678c-ce8a-49cb-8ec6-aa2d0717a3d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.667061 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.667133 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.678972 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c469598fb-5vvx6"
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.870446 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config\") pod \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.870540 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqpzq\" (UniqueName: \"kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq\") pod \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.870676 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs\") pod \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.870788 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config\") pod \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.870853 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle\") pod \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\" (UID: \"32bd2816-4963-4acd-b1c0-3629dd1c2c3a\") "
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.881138 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "32bd2816-4963-4acd-b1c0-3629dd1c2c3a" (UID: "32bd2816-4963-4acd-b1c0-3629dd1c2c3a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.882089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq" (OuterVolumeSpecName: "kube-api-access-hqpzq") pod "32bd2816-4963-4acd-b1c0-3629dd1c2c3a" (UID: "32bd2816-4963-4acd-b1c0-3629dd1c2c3a"). InnerVolumeSpecName "kube-api-access-hqpzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.944130 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32bd2816-4963-4acd-b1c0-3629dd1c2c3a" (UID: "32bd2816-4963-4acd-b1c0-3629dd1c2c3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.968142 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "32bd2816-4963-4acd-b1c0-3629dd1c2c3a" (UID: "32bd2816-4963-4acd-b1c0-3629dd1c2c3a"). InnerVolumeSpecName "ovndb-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.982083 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqpzq\" (UniqueName: \"kubernetes.io/projected/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-kube-api-access-hqpzq\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.982113 4780 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.982124 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:10 crc kubenswrapper[4780]: I1205 07:08:10.982134 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.025483 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config" (OuterVolumeSpecName: "config") pod "32bd2816-4963-4acd-b1c0-3629dd1c2c3a" (UID: "32bd2816-4963-4acd-b1c0-3629dd1c2c3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.066690 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xcxs"] Dec 05 07:08:11 crc kubenswrapper[4780]: W1205 07:08:11.069298 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d3dd0d5_7b46_4ad7_b31d_784587823a79.slice/crio-248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b WatchSource:0}: Error finding container 248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b: Status 404 returned error can't find the container with id 248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.074615 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2782-account-create-update-z246t"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.084356 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/32bd2816-4963-4acd-b1c0-3629dd1c2c3a-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:11 crc kubenswrapper[4780]: W1205 07:08:11.091203 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf06e0616_87ca_48d5_9738_e92e1edb2ac5.slice/crio-cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205 WatchSource:0}: Error finding container cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205: Status 404 returned error can't find the container with id cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205 Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.096420 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ebdd-account-create-update-5f9cl"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.111580 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-db-create-m5dbk"] Dec 05 07:08:11 crc kubenswrapper[4780]: W1205 07:08:11.112003 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87645633_adc7_4611_ac03_0bd01623a44e.slice/crio-0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4 WatchSource:0}: Error finding container 0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4: Status 404 returned error can't find the container with id 0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4 Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.123136 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-79a0-account-create-update-tm7nl"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.206682 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" event={"ID":"f06e0616-87ca-48d5-9738-e92e1edb2ac5","Type":"ContainerStarted","Data":"cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.208772 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" event={"ID":"acb4dcef-f976-4800-9e85-c59617b30727","Type":"ContainerStarted","Data":"384bf64ad68783419e746c25e80415946897f6d8a8d8c389d04a4200a8f05739"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.209937 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5dbk" event={"ID":"87645633-adc7-4611-ac03-0bd01623a44e","Type":"ContainerStarted","Data":"0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.211373 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"65736cb4-25b2-402e-8dfe-d00b218a274b","Type":"ContainerStarted","Data":"b9808fa835d43d815f703095686bccb9a6eedb6aab78ee5755aeddb342d50d7a"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.216611 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c469598fb-5vvx6" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.216775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c469598fb-5vvx6" event={"ID":"32bd2816-4963-4acd-b1c0-3629dd1c2c3a","Type":"ContainerDied","Data":"e5a596b6015a96f1ad86b419c82272790aa737a6a43eb72ce0211fd404193a1f"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.216818 4780 scope.go:117] "RemoveContainer" containerID="d6143d4b965cf16bd3eb3b2d8846c785f1dfb5782c3049ff122d6f9bc135d91f" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.220544 4780 generic.go:334] "Generic (PLEG): container finished" podID="0fb26cab-c196-4a45-8ba3-2d9066683eaa" containerID="2d699037508ccbdd7d61092346751742b0b0f36d49068e0aec3a8c41238695cd" exitCode=0 Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.220673 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bwb2m" event={"ID":"0fb26cab-c196-4a45-8ba3-2d9066683eaa","Type":"ContainerDied","Data":"2d699037508ccbdd7d61092346751742b0b0f36d49068e0aec3a8c41238695cd"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.220696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bwb2m" event={"ID":"0fb26cab-c196-4a45-8ba3-2d9066683eaa","Type":"ContainerStarted","Data":"9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.228136 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2782-account-create-update-z246t" event={"ID":"0d3dd0d5-7b46-4ad7-b31d-784587823a79","Type":"ContainerStarted","Data":"248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.228707 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.725448408 podStartE2EDuration="16.228691431s" podCreationTimestamp="2025-12-05 07:07:55 +0000 UTC" firstStartedPulling="2025-12-05 07:07:56.562932645 +0000 UTC m=+1310.632448977" lastFinishedPulling="2025-12-05 07:08:10.066175668 +0000 UTC m=+1324.135692000" observedRunningTime="2025-12-05 07:08:11.225329121 +0000 UTC m=+1325.294845453" watchObservedRunningTime="2025-12-05 07:08:11.228691431 +0000 UTC m=+1325.298207753" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.234109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xcxs" event={"ID":"89a53d09-12f2-4488-814f-47114ab22120","Type":"ContainerStarted","Data":"045869724ba832f81e0700659ed9149715faefd7277403a4e05b2098782cc46c"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.250099 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2511678c-ce8a-49cb-8ec6-aa2d0717a3d1","Type":"ContainerDied","Data":"50130c6975b2542f762feabc3ad5837cc515bd9042b0a4f0fe9419040f4f78a3"} Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.250251 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.272462 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.284273 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c469598fb-5vvx6"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.298042 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.313619 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.325587 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326037 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="sg-core" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326057 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="sg-core" Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326079 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326087 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326102 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-notification-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326110 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-notification-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326122 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-api" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326128 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-api" Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326149 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-central-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326155 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-central-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: E1205 07:08:11.326166 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326172 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326342 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-notification-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326353 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-api" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326360 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="sg-core" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326370 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="ceilometer-central-agent" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326383 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" containerName="proxy-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.326393 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" containerName="neutron-httpd" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.328151 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.335412 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.335505 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.340518 4780 scope.go:117] "RemoveContainer" containerID="209ef2daa7285ce41358b37abc553d7c949e87a567a4f404d81a57a426b0af45" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.356671 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.401802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.401928 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.401970 4780 scope.go:117] "RemoveContainer" containerID="13da505ed3a2caa0fefbf684b14d22b8421b52b40329cf917b008fd0be6b7229" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.401985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.402146 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.402189 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.402223 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zsxf\" (UniqueName: \"kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.402241 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.446846 4780 scope.go:117] "RemoveContainer" containerID="826efb8fd7f678263624359df77ca36e1fc394deed96889a21c75cb0214e446e" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.480163 4780 scope.go:117] "RemoveContainer" containerID="04a88cbf99864ec61d56540fc895f889dc497aea74b46503ef4f4d4291789132" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.504478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zsxf\" (UniqueName: \"kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.504532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505036 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505139 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505255 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505278 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505315 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.505700 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.506759 4780 scope.go:117] "RemoveContainer" containerID="c4e630fd33071029b5f5426cfb67b1d809e1ba571bbe5ec446fcb69439bc8db0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.510227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.511946 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.512106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.513021 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.521057 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zsxf\" (UniqueName: \"kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf\") pod \"ceilometer-0\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " pod="openstack/ceilometer-0" Dec 05 07:08:11 crc kubenswrapper[4780]: I1205 07:08:11.707786 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.171221 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2511678c-ce8a-49cb-8ec6-aa2d0717a3d1" path="/var/lib/kubelet/pods/2511678c-ce8a-49cb-8ec6-aa2d0717a3d1/volumes" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.173425 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bd2816-4963-4acd-b1c0-3629dd1c2c3a" path="/var/lib/kubelet/pods/32bd2816-4963-4acd-b1c0-3629dd1c2c3a/volumes" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.177479 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.297283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" event={"ID":"acb4dcef-f976-4800-9e85-c59617b30727","Type":"ContainerStarted","Data":"8fffeffc9dd667b68b1d3d2ab1c350a63ba46bb0fae6eb9e60e37596aec7de8e"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.303258 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2782-account-create-update-z246t" event={"ID":"0d3dd0d5-7b46-4ad7-b31d-784587823a79","Type":"ContainerStarted","Data":"d200411146961a40d5c61aaae124b891efa425fbc8abe655520e6b1f080b9824"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.317019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xcxs" event={"ID":"89a53d09-12f2-4488-814f-47114ab22120","Type":"ContainerStarted","Data":"4e55d512af9e461881a0ca014139b9f0bc12396a8a4f07b2d40fc4f1f7ca7d8d"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.326187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerStarted","Data":"987c1b676e01e297b68e25a27148106f0ebc743f60f4bfbd3478e29bf64278c0"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.326625 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" podStartSLOduration=5.326612463 podStartE2EDuration="5.326612463s" podCreationTimestamp="2025-12-05 07:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:12.315469382 +0000 UTC m=+1326.384985714" watchObservedRunningTime="2025-12-05 07:08:12.326612463 +0000 UTC m=+1326.396128795" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.327482 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5dbk" event={"ID":"87645633-adc7-4611-ac03-0bd01623a44e","Type":"ContainerStarted","Data":"735e992792dfcfc2d0ddf77ecf8c63baaf98681ce5ab2258cb6dfd4d6bebaf8e"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.337181 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-2782-account-create-update-z246t" podStartSLOduration=5.337163108 podStartE2EDuration="5.337163108s" podCreationTimestamp="2025-12-05 07:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:12.333216161 +0000 UTC m=+1326.402732493" watchObservedRunningTime="2025-12-05 07:08:12.337163108 +0000 UTC m=+1326.406679440" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.345190 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" event={"ID":"f06e0616-87ca-48d5-9738-e92e1edb2ac5","Type":"ContainerStarted","Data":"f85ea07834ee61f621436527d18fcaee12cce7479f24d2ad60f921417162105f"} Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.358697 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9xcxs" podStartSLOduration=5.358680348 podStartE2EDuration="5.358680348s" podCreationTimestamp="2025-12-05 07:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:12.358365269 +0000 UTC m=+1326.427881601" watchObservedRunningTime="2025-12-05 07:08:12.358680348 +0000 UTC m=+1326.428196680" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.380178 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-m5dbk" podStartSLOduration=5.380159227 podStartE2EDuration="5.380159227s" podCreationTimestamp="2025-12-05 07:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:12.374942807 +0000 UTC m=+1326.444459139" watchObservedRunningTime="2025-12-05 07:08:12.380159227 +0000 UTC m=+1326.449675559" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.398653 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" podStartSLOduration=5.398625876 podStartE2EDuration="5.398625876s" podCreationTimestamp="2025-12-05 07:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:12.392193612 +0000 UTC m=+1326.461709944" watchObservedRunningTime="2025-12-05 07:08:12.398625876 +0000 UTC m=+1326.468142238" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.732363 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bwb2m" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.880779 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts\") pod \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.881000 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7j6\" (UniqueName: \"kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6\") pod \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\" (UID: \"0fb26cab-c196-4a45-8ba3-2d9066683eaa\") " Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.881330 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fb26cab-c196-4a45-8ba3-2d9066683eaa" (UID: "0fb26cab-c196-4a45-8ba3-2d9066683eaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.881735 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb26cab-c196-4a45-8ba3-2d9066683eaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.890794 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6" (OuterVolumeSpecName: "kube-api-access-dp7j6") pod "0fb26cab-c196-4a45-8ba3-2d9066683eaa" (UID: "0fb26cab-c196-4a45-8ba3-2d9066683eaa"). InnerVolumeSpecName "kube-api-access-dp7j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:12 crc kubenswrapper[4780]: I1205 07:08:12.982764 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7j6\" (UniqueName: \"kubernetes.io/projected/0fb26cab-c196-4a45-8ba3-2d9066683eaa-kube-api-access-dp7j6\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.106658 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.355473 4780 generic.go:334] "Generic (PLEG): container finished" podID="acb4dcef-f976-4800-9e85-c59617b30727" containerID="8fffeffc9dd667b68b1d3d2ab1c350a63ba46bb0fae6eb9e60e37596aec7de8e" exitCode=0 Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.355534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" event={"ID":"acb4dcef-f976-4800-9e85-c59617b30727","Type":"ContainerDied","Data":"8fffeffc9dd667b68b1d3d2ab1c350a63ba46bb0fae6eb9e60e37596aec7de8e"} Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.357557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bwb2m" event={"ID":"0fb26cab-c196-4a45-8ba3-2d9066683eaa","Type":"ContainerDied","Data":"9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127"} Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.357581 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f75fd63fd74eec15d81b42a3ce119f244eaace0b34a640d03d5c0f32e8e4127" Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.357613 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bwb2m" Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.359560 4780 generic.go:334] "Generic (PLEG): container finished" podID="0d3dd0d5-7b46-4ad7-b31d-784587823a79" containerID="d200411146961a40d5c61aaae124b891efa425fbc8abe655520e6b1f080b9824" exitCode=0 Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.359598 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2782-account-create-update-z246t" event={"ID":"0d3dd0d5-7b46-4ad7-b31d-784587823a79","Type":"ContainerDied","Data":"d200411146961a40d5c61aaae124b891efa425fbc8abe655520e6b1f080b9824"} Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.361213 4780 generic.go:334] "Generic (PLEG): container finished" podID="89a53d09-12f2-4488-814f-47114ab22120" containerID="4e55d512af9e461881a0ca014139b9f0bc12396a8a4f07b2d40fc4f1f7ca7d8d" exitCode=0 Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.361251 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xcxs" event={"ID":"89a53d09-12f2-4488-814f-47114ab22120","Type":"ContainerDied","Data":"4e55d512af9e461881a0ca014139b9f0bc12396a8a4f07b2d40fc4f1f7ca7d8d"} Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.363141 4780 generic.go:334] "Generic (PLEG): container finished" podID="87645633-adc7-4611-ac03-0bd01623a44e" containerID="735e992792dfcfc2d0ddf77ecf8c63baaf98681ce5ab2258cb6dfd4d6bebaf8e" exitCode=0 Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.363187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5dbk" event={"ID":"87645633-adc7-4611-ac03-0bd01623a44e","Type":"ContainerDied","Data":"735e992792dfcfc2d0ddf77ecf8c63baaf98681ce5ab2258cb6dfd4d6bebaf8e"} Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.366485 4780 generic.go:334] "Generic (PLEG): container finished" podID="f06e0616-87ca-48d5-9738-e92e1edb2ac5" containerID="f85ea07834ee61f621436527d18fcaee12cce7479f24d2ad60f921417162105f" exitCode=0 Dec 05 07:08:13 crc kubenswrapper[4780]: I1205 07:08:13.366515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" event={"ID":"f06e0616-87ca-48d5-9738-e92e1edb2ac5","Type":"ContainerDied","Data":"f85ea07834ee61f621436527d18fcaee12cce7479f24d2ad60f921417162105f"} Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.396551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerStarted","Data":"3e11ae856e0fea1a7512ba5bda08bb05d56aa91fa888ba174c2dba9f582730a6"} Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.827019 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m5dbk" Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.951167 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts\") pod \"87645633-adc7-4611-ac03-0bd01623a44e\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.951382 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp2ts\" (UniqueName: \"kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts\") pod \"87645633-adc7-4611-ac03-0bd01623a44e\" (UID: \"87645633-adc7-4611-ac03-0bd01623a44e\") " Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.951987 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87645633-adc7-4611-ac03-0bd01623a44e" (UID: "87645633-adc7-4611-ac03-0bd01623a44e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.959125 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts" (OuterVolumeSpecName: "kube-api-access-sp2ts") pod "87645633-adc7-4611-ac03-0bd01623a44e" (UID: "87645633-adc7-4611-ac03-0bd01623a44e"). InnerVolumeSpecName "kube-api-access-sp2ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:14 crc kubenswrapper[4780]: I1205 07:08:14.988763 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:14.999283 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2782-account-create-update-z246t" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.002256 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xcxs" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.010646 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065429 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts\") pod \"89a53d09-12f2-4488-814f-47114ab22120\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chvjv\" (UniqueName: \"kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv\") pod \"acb4dcef-f976-4800-9e85-c59617b30727\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065564 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts\") pod \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065618 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts\") pod \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065712 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rqvp\" (UniqueName: \"kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp\") pod \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\" (UID: \"f06e0616-87ca-48d5-9738-e92e1edb2ac5\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065749 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjn5x\" (UniqueName: \"kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x\") pod \"89a53d09-12f2-4488-814f-47114ab22120\" (UID: \"89a53d09-12f2-4488-814f-47114ab22120\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065785 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts\") pod \"acb4dcef-f976-4800-9e85-c59617b30727\" (UID: \"acb4dcef-f976-4800-9e85-c59617b30727\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.065834 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2l8j\" (UniqueName: \"kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j\") pod \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\" (UID: \"0d3dd0d5-7b46-4ad7-b31d-784587823a79\") " Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.066218 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp2ts\" (UniqueName: \"kubernetes.io/projected/87645633-adc7-4611-ac03-0bd01623a44e-kube-api-access-sp2ts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.066242 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87645633-adc7-4611-ac03-0bd01623a44e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 
07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.066839 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f06e0616-87ca-48d5-9738-e92e1edb2ac5" (UID: "f06e0616-87ca-48d5-9738-e92e1edb2ac5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.067347 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89a53d09-12f2-4488-814f-47114ab22120" (UID: "89a53d09-12f2-4488-814f-47114ab22120"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.067353 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d3dd0d5-7b46-4ad7-b31d-784587823a79" (UID: "0d3dd0d5-7b46-4ad7-b31d-784587823a79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.067746 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acb4dcef-f976-4800-9e85-c59617b30727" (UID: "acb4dcef-f976-4800-9e85-c59617b30727"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.070307 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j" (OuterVolumeSpecName: "kube-api-access-t2l8j") pod "0d3dd0d5-7b46-4ad7-b31d-784587823a79" (UID: "0d3dd0d5-7b46-4ad7-b31d-784587823a79"). InnerVolumeSpecName "kube-api-access-t2l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.070367 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv" (OuterVolumeSpecName: "kube-api-access-chvjv") pod "acb4dcef-f976-4800-9e85-c59617b30727" (UID: "acb4dcef-f976-4800-9e85-c59617b30727"). InnerVolumeSpecName "kube-api-access-chvjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.074964 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x" (OuterVolumeSpecName: "kube-api-access-gjn5x") pod "89a53d09-12f2-4488-814f-47114ab22120" (UID: "89a53d09-12f2-4488-814f-47114ab22120"). InnerVolumeSpecName "kube-api-access-gjn5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.078455 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp" (OuterVolumeSpecName: "kube-api-access-6rqvp") pod "f06e0616-87ca-48d5-9738-e92e1edb2ac5" (UID: "f06e0616-87ca-48d5-9738-e92e1edb2ac5"). 
InnerVolumeSpecName "kube-api-access-6rqvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168169 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a53d09-12f2-4488-814f-47114ab22120-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168233 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chvjv\" (UniqueName: \"kubernetes.io/projected/acb4dcef-f976-4800-9e85-c59617b30727-kube-api-access-chvjv\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168245 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3dd0d5-7b46-4ad7-b31d-784587823a79-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168254 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06e0616-87ca-48d5-9738-e92e1edb2ac5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168265 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rqvp\" (UniqueName: \"kubernetes.io/projected/f06e0616-87ca-48d5-9738-e92e1edb2ac5-kube-api-access-6rqvp\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168275 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjn5x\" (UniqueName: \"kubernetes.io/projected/89a53d09-12f2-4488-814f-47114ab22120-kube-api-access-gjn5x\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168285 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acb4dcef-f976-4800-9e85-c59617b30727-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.168293 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2l8j\" (UniqueName: \"kubernetes.io/projected/0d3dd0d5-7b46-4ad7-b31d-784587823a79-kube-api-access-t2l8j\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.430445 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.430657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ebdd-account-create-update-5f9cl" event={"ID":"f06e0616-87ca-48d5-9738-e92e1edb2ac5","Type":"ContainerDied","Data":"cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.430701 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfbaa7b869f16f7ff592e7738561cc995068f7928278c06dbcafe4a302772205" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.436172 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.438487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79a0-account-create-update-tm7nl" event={"ID":"acb4dcef-f976-4800-9e85-c59617b30727","Type":"ContainerDied","Data":"384bf64ad68783419e746c25e80415946897f6d8a8d8c389d04a4200a8f05739"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.438527 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384bf64ad68783419e746c25e80415946897f6d8a8d8c389d04a4200a8f05739" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.440089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2782-account-create-update-z246t" event={"ID":"0d3dd0d5-7b46-4ad7-b31d-784587823a79","Type":"ContainerDied","Data":"248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.440112 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248226eeddf31af6f5711bd64b541c18f171a8d7f1c6419c08ce8caab1a44f8b" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.440187 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2782-account-create-update-z246t" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.442090 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xcxs" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.442408 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xcxs" event={"ID":"89a53d09-12f2-4488-814f-47114ab22120","Type":"ContainerDied","Data":"045869724ba832f81e0700659ed9149715faefd7277403a4e05b2098782cc46c"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.442479 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045869724ba832f81e0700659ed9149715faefd7277403a4e05b2098782cc46c" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.448890 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerStarted","Data":"227c7c543aa62ca92bd68a79feb164aaca2503256418d6e0d705adab2a6335c0"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.450756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m5dbk" event={"ID":"87645633-adc7-4611-ac03-0bd01623a44e","Type":"ContainerDied","Data":"0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4"} Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.450797 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a74c24df2343004dff6a0a8aa0ea55b73d81e5439e67efcbac94d5718af5cf4" Dec 05 07:08:15 crc kubenswrapper[4780]: I1205 07:08:15.450835 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m5dbk" Dec 05 07:08:16 crc kubenswrapper[4780]: I1205 07:08:16.460819 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerStarted","Data":"578ee44df503fc6338209d54c428a0ed9a216217555ca56adafa4f036854ca19"} Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.497951 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerStarted","Data":"beadfd54425ac2b4a5dcfc887c3f7a18dd4fc7a45fdf7cde6b922c0807058d07"} Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.500029 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-central-agent" containerID="cri-o://3e11ae856e0fea1a7512ba5bda08bb05d56aa91fa888ba174c2dba9f582730a6" gracePeriod=30 Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.500370 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="proxy-httpd" containerID="cri-o://beadfd54425ac2b4a5dcfc887c3f7a18dd4fc7a45fdf7cde6b922c0807058d07" gracePeriod=30 Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.500394 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.500466 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="sg-core" containerID="cri-o://578ee44df503fc6338209d54c428a0ed9a216217555ca56adafa4f036854ca19" gracePeriod=30 Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.500521 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-notification-agent" containerID="cri-o://227c7c543aa62ca92bd68a79feb164aaca2503256418d6e0d705adab2a6335c0" gracePeriod=30 Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.533455 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.810914795 podStartE2EDuration="6.533435564s" podCreationTimestamp="2025-12-05 07:08:11 +0000 UTC" firstStartedPulling="2025-12-05 07:08:12.191406197 +0000 UTC m=+1326.260922529" lastFinishedPulling="2025-12-05 07:08:16.913926966 +0000 UTC m=+1330.983443298" observedRunningTime="2025-12-05 07:08:17.531957854 +0000 UTC m=+1331.601474206" watchObservedRunningTime="2025-12-05 07:08:17.533435564 +0000 UTC m=+1331.602951896" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.844785 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jrvl"] Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845428 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb4dcef-f976-4800-9e85-c59617b30727" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845443 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb4dcef-f976-4800-9e85-c59617b30727" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845470 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0fb26cab-c196-4a45-8ba3-2d9066683eaa" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845477 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb26cab-c196-4a45-8ba3-2d9066683eaa" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845495 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3dd0d5-7b46-4ad7-b31d-784587823a79" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845503 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3dd0d5-7b46-4ad7-b31d-784587823a79" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845512 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06e0616-87ca-48d5-9738-e92e1edb2ac5" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845521 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06e0616-87ca-48d5-9738-e92e1edb2ac5" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845530 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87645633-adc7-4611-ac03-0bd01623a44e" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845540 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="87645633-adc7-4611-ac03-0bd01623a44e" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: E1205 07:08:17.845553 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a53d09-12f2-4488-814f-47114ab22120" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845559 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a53d09-12f2-4488-814f-47114ab22120" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845741 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06e0616-87ca-48d5-9738-e92e1edb2ac5" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845751 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3dd0d5-7b46-4ad7-b31d-784587823a79" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845766 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb26cab-c196-4a45-8ba3-2d9066683eaa" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845774 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="87645633-adc7-4611-ac03-0bd01623a44e" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845792 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb4dcef-f976-4800-9e85-c59617b30727" containerName="mariadb-account-create-update" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.845801 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a53d09-12f2-4488-814f-47114ab22120" containerName="mariadb-database-create" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.846405 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.850851 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.851115 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gq457" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.853398 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.873998 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jrvl"] Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.920567 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf2mt\" (UniqueName: \"kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.920635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.920751 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:17 crc kubenswrapper[4780]: I1205 07:08:17.921026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.022983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.023062 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf2mt\" (UniqueName: \"kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.023108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: 
\"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.023130 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.031776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.031848 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.033385 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.043682 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf2mt\" (UniqueName: \"kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt\") pod \"nova-cell0-conductor-db-sync-9jrvl\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.162501 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510209 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerID="beadfd54425ac2b4a5dcfc887c3f7a18dd4fc7a45fdf7cde6b922c0807058d07" exitCode=0 Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510569 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerID="578ee44df503fc6338209d54c428a0ed9a216217555ca56adafa4f036854ca19" exitCode=2 Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510581 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerID="227c7c543aa62ca92bd68a79feb164aaca2503256418d6e0d705adab2a6335c0" exitCode=0 Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510591 4780 generic.go:334] "Generic (PLEG): container finished" podID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerID="3e11ae856e0fea1a7512ba5bda08bb05d56aa91fa888ba174c2dba9f582730a6" exitCode=0 Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerDied","Data":"beadfd54425ac2b4a5dcfc887c3f7a18dd4fc7a45fdf7cde6b922c0807058d07"} Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510643 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerDied","Data":"578ee44df503fc6338209d54c428a0ed9a216217555ca56adafa4f036854ca19"} Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510660 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerDied","Data":"227c7c543aa62ca92bd68a79feb164aaca2503256418d6e0d705adab2a6335c0"} Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.510672 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerDied","Data":"3e11ae856e0fea1a7512ba5bda08bb05d56aa91fa888ba174c2dba9f582730a6"} Dec 05 07:08:18 crc kubenswrapper[4780]: I1205 07:08:18.612287 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jrvl"] Dec 05 07:08:18 crc kubenswrapper[4780]: W1205 07:08:18.613713 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a3bc8e1_c401_4c09_a8bd_91fcb98e96e3.slice/crio-760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b WatchSource:0}: Error finding container 760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b: Status 404 returned error can't find the container with id 760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.424837 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.527609 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3c0fdbf-6f18-4e15-aa04-d34623bd6453","Type":"ContainerDied","Data":"987c1b676e01e297b68e25a27148106f0ebc743f60f4bfbd3478e29bf64278c0"} Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.528087 4780 scope.go:117] "RemoveContainer" containerID="beadfd54425ac2b4a5dcfc887c3f7a18dd4fc7a45fdf7cde6b922c0807058d07" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.528233 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.532105 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" event={"ID":"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3","Type":"ContainerStarted","Data":"760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b"} Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.550499 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.550568 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.550622 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.550783 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zsxf\" (UniqueName: \"kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.550813 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.551443 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.551487 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle\") pod \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\" (UID: \"b3c0fdbf-6f18-4e15-aa04-d34623bd6453\") " Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.551870 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.552017 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.552328 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.558333 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf" (OuterVolumeSpecName: "kube-api-access-7zsxf") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "kube-api-access-7zsxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.558604 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts" (OuterVolumeSpecName: "scripts") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.565387 4780 scope.go:117] "RemoveContainer" containerID="578ee44df503fc6338209d54c428a0ed9a216217555ca56adafa4f036854ca19" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.583758 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.646128 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.653606 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.653648 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.653664 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zsxf\" (UniqueName: \"kubernetes.io/projected/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-kube-api-access-7zsxf\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.653678 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.653690 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.675259 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data" (OuterVolumeSpecName: "config-data") pod "b3c0fdbf-6f18-4e15-aa04-d34623bd6453" (UID: "b3c0fdbf-6f18-4e15-aa04-d34623bd6453"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.684513 4780 scope.go:117] "RemoveContainer" containerID="227c7c543aa62ca92bd68a79feb164aaca2503256418d6e0d705adab2a6335c0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.727141 4780 scope.go:117] "RemoveContainer" containerID="3e11ae856e0fea1a7512ba5bda08bb05d56aa91fa888ba174c2dba9f582730a6" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.755054 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3c0fdbf-6f18-4e15-aa04-d34623bd6453-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.880625 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.892386 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901153 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:19 crc kubenswrapper[4780]: E1205 07:08:19.901650 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="proxy-httpd" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901676 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="proxy-httpd" Dec 05 07:08:19 crc kubenswrapper[4780]: E1205 07:08:19.901704 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="sg-core" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901712 4780 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="sg-core" Dec 05 07:08:19 crc kubenswrapper[4780]: E1205 07:08:19.901735 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-notification-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901744 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-notification-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: E1205 07:08:19.901762 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-central-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901767 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-central-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.901982 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="proxy-httpd" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.902001 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-central-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.902012 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="sg-core" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.902027 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" containerName="ceilometer-notification-agent" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.905337 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.910332 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.910642 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.931844 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959265 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959351 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959536 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959582 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jb5\" (UniqueName: \"kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959609 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:19 crc kubenswrapper[4780]: I1205 07:08:19.959726 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062363 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 
07:08:20.062453 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062569 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jb5\" (UniqueName: \"kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062693 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.062811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.063021 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.063557 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.067942 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.067990 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.068605 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.070506 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.081648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jb5\" (UniqueName: \"kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5\") pod \"ceilometer-0\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.163865 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c0fdbf-6f18-4e15-aa04-d34623bd6453" path="/var/lib/kubelet/pods/b3c0fdbf-6f18-4e15-aa04-d34623bd6453/volumes" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.238000 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:20 crc kubenswrapper[4780]: I1205 07:08:20.688705 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:20 crc kubenswrapper[4780]: W1205 07:08:20.688780 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d641dc_6c58_45f8_a6b8_a39a31c06331.slice/crio-e015c55c0998a814e6d76c491a273218d0335d0a9dfbea3606ed4cdeb67bd175 WatchSource:0}: Error finding container e015c55c0998a814e6d76c491a273218d0335d0a9dfbea3606ed4cdeb67bd175: Status 404 returned error can't find the container with id e015c55c0998a814e6d76c491a273218d0335d0a9dfbea3606ed4cdeb67bd175 Dec 05 07:08:21 crc kubenswrapper[4780]: I1205 07:08:21.557680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerStarted","Data":"e015c55c0998a814e6d76c491a273218d0335d0a9dfbea3606ed4cdeb67bd175"} Dec 05 07:08:22 crc kubenswrapper[4780]: I1205 07:08:22.592901 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerStarted","Data":"a199a9f7025b4b11cb11164273274e5fdfb78326952d72579cc93462c59f176d"} Dec 05 07:08:24 crc kubenswrapper[4780]: I1205 07:08:24.127819 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:25 crc kubenswrapper[4780]: I1205 07:08:25.409728 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:25 crc kubenswrapper[4780]: I1205 07:08:25.410322 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-log" containerID="cri-o://7d0604fcc8d102de413cadab5cfc7ff08b472baf10c9737ead45b9bdc22e6270" gracePeriod=30 Dec 05 07:08:25 crc kubenswrapper[4780]: I1205 07:08:25.410460 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" 
containerName="glance-httpd" containerID="cri-o://bca6744717308a0be549da78a3f61dff30b42adb90908a02ce716454f8e3df70" gracePeriod=30 Dec 05 07:08:25 crc kubenswrapper[4780]: I1205 07:08:25.637609 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerDied","Data":"7d0604fcc8d102de413cadab5cfc7ff08b472baf10c9737ead45b9bdc22e6270"} Dec 05 07:08:25 crc kubenswrapper[4780]: I1205 07:08:25.637623 4780 generic.go:334] "Generic (PLEG): container finished" podID="83acdff4-818d-4715-875b-0851c4fa04f0" containerID="7d0604fcc8d102de413cadab5cfc7ff08b472baf10c9737ead45b9bdc22e6270" exitCode=143 Dec 05 07:08:26 crc kubenswrapper[4780]: I1205 07:08:26.362358 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:26 crc kubenswrapper[4780]: I1205 07:08:26.364290 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-log" containerID="cri-o://2ab4dd976fe70a55ae448cbd52e5d5965f6936762e89b58a57dd0e7681870ad2" gracePeriod=30 Dec 05 07:08:26 crc kubenswrapper[4780]: I1205 07:08:26.364573 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-httpd" containerID="cri-o://bd58ca4bcdb9ce3889888c2869ac2ebaa3277c1937950ee6e41b129e6d72bae6" gracePeriod=30 Dec 05 07:08:26 crc kubenswrapper[4780]: I1205 07:08:26.647999 4780 generic.go:334] "Generic (PLEG): container finished" podID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerID="2ab4dd976fe70a55ae448cbd52e5d5965f6936762e89b58a57dd0e7681870ad2" exitCode=143 Dec 05 07:08:26 crc kubenswrapper[4780]: I1205 07:08:26.648043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerDied","Data":"2ab4dd976fe70a55ae448cbd52e5d5965f6936762e89b58a57dd0e7681870ad2"} Dec 05 07:08:27 crc kubenswrapper[4780]: I1205 07:08:27.658935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerStarted","Data":"a84d5876c036a81fd57cb2b7bd44a7ff01c21c900dd2f0d600a480c89ad8b457"} Dec 05 07:08:27 crc kubenswrapper[4780]: I1205 07:08:27.662235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" event={"ID":"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3","Type":"ContainerStarted","Data":"28ccece98c496d594b51ff9c26f60e3f17e74c62271711c3b2b7f2111e13a950"} Dec 05 07:08:27 crc kubenswrapper[4780]: I1205 07:08:27.684397 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" podStartSLOduration=2.05678672 podStartE2EDuration="10.684375561s" podCreationTimestamp="2025-12-05 07:08:17 +0000 UTC" firstStartedPulling="2025-12-05 07:08:18.615659773 +0000 UTC m=+1332.685176105" lastFinishedPulling="2025-12-05 07:08:27.243248624 +0000 UTC m=+1341.312764946" observedRunningTime="2025-12-05 07:08:27.67581393 +0000 UTC m=+1341.745330272" watchObservedRunningTime="2025-12-05 07:08:27.684375561 +0000 UTC m=+1341.753891893" Dec 05 07:08:28 crc kubenswrapper[4780]: I1205 07:08:28.709350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerStarted","Data":"172ebb176734ee191e1caa2015c4ab71d57ad5faf21dac1ecb49bcb38bdfb66f"} Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.731769 4780 generic.go:334] "Generic (PLEG): container finished" podID="83acdff4-818d-4715-875b-0851c4fa04f0" containerID="bca6744717308a0be549da78a3f61dff30b42adb90908a02ce716454f8e3df70" exitCode=0 Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.732221 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerDied","Data":"bca6744717308a0be549da78a3f61dff30b42adb90908a02ce716454f8e3df70"} Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.745242 4780 generic.go:334] "Generic (PLEG): container finished" podID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerID="bd58ca4bcdb9ce3889888c2869ac2ebaa3277c1937950ee6e41b129e6d72bae6" exitCode=0 Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.745293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerDied","Data":"bd58ca4bcdb9ce3889888c2869ac2ebaa3277c1937950ee6e41b129e6d72bae6"} Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.767947 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.779006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.781641 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.781702 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.781778 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.781806 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsdf\" (UniqueName: \"kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.787638 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts" (OuterVolumeSpecName: "scripts") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: 
"83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.789075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.791333 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf" (OuterVolumeSpecName: "kube-api-access-9vsdf") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "kube-api-access-9vsdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.840078 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.848822 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.895140 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.895252 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.895404 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896027 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896045 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896069 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896082 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896096 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vsdf\" (UniqueName: \"kubernetes.io/projected/83acdff4-818d-4715-875b-0851c4fa04f0-kube-api-access-9vsdf\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.896791 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs" (OuterVolumeSpecName: "logs") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.920899 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 07:08:29 crc kubenswrapper[4780]: I1205 07:08:29.961545 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data" (OuterVolumeSpecName: "config-data") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.997036 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.997516 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") pod \"83acdff4-818d-4715-875b-0851c4fa04f0\" (UID: \"83acdff4-818d-4715-875b-0851c4fa04f0\") " Dec 05 07:08:30 crc kubenswrapper[4780]: W1205 07:08:29.997637 4780 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/83acdff4-818d-4715-875b-0851c4fa04f0/volumes/kubernetes.io~secret/public-tls-certs Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.997654 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83acdff4-818d-4715-875b-0851c4fa04f0" (UID: "83acdff4-818d-4715-875b-0851c4fa04f0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.998224 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.998238 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.998250 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83acdff4-818d-4715-875b-0851c4fa04f0-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:29.998259 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83acdff4-818d-4715-875b-0851c4fa04f0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.152424 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.206160 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.236321 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.307762 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308175 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308213 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sbvc\" (UniqueName: \"kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308318 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308366 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308412 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308467 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs\") pod \"5738fd8e-a30a-4470-8bf3-47c00286f574\" (UID: \"5738fd8e-a30a-4470-8bf3-47c00286f574\") " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.308814 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.309699 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.310176 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs" (OuterVolumeSpecName: "logs") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.313156 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.313341 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc" (OuterVolumeSpecName: "kube-api-access-6sbvc") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "kube-api-access-6sbvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.315126 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts" (OuterVolumeSpecName: "scripts") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.359807 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.369691 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data" (OuterVolumeSpecName: "config-data") pod "5738fd8e-a30a-4470-8bf3-47c00286f574" (UID: "5738fd8e-a30a-4470-8bf3-47c00286f574"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410619 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410659 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410670 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410681 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5738fd8e-a30a-4470-8bf3-47c00286f574-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410717 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410730 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5738fd8e-a30a-4470-8bf3-47c00286f574-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.410742 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sbvc\" (UniqueName: \"kubernetes.io/projected/5738fd8e-a30a-4470-8bf3-47c00286f574-kube-api-access-6sbvc\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.429657 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.512046 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.757591 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83acdff4-818d-4715-875b-0851c4fa04f0","Type":"ContainerDied","Data":"e21405c9e8e46ab829406f66f28af255b53885faf35febac268bbe84c118de4c"} Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.757656 4780 scope.go:117] "RemoveContainer" containerID="bca6744717308a0be549da78a3f61dff30b42adb90908a02ce716454f8e3df70" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.757783 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.767598 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5738fd8e-a30a-4470-8bf3-47c00286f574","Type":"ContainerDied","Data":"7776cd34a8d8963b152da41c2216abf5cea97fe65c74cb98f9e7290b0c0bf392"} Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.767614 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772097 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerStarted","Data":"c59fceb83d3601f51f3655960c467caecb3313de2e95c9cf6e0840b622491da9"} Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772364 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-central-agent" containerID="cri-o://a199a9f7025b4b11cb11164273274e5fdfb78326952d72579cc93462c59f176d" gracePeriod=30 Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772609 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="sg-core" containerID="cri-o://172ebb176734ee191e1caa2015c4ab71d57ad5faf21dac1ecb49bcb38bdfb66f" gracePeriod=30 Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772643 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772698 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="proxy-httpd" containerID="cri-o://c59fceb83d3601f51f3655960c467caecb3313de2e95c9cf6e0840b622491da9" gracePeriod=30 Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.772726 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-notification-agent" containerID="cri-o://a84d5876c036a81fd57cb2b7bd44a7ff01c21c900dd2f0d600a480c89ad8b457" gracePeriod=30 Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.790871 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.806068 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.810458 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.04039566 podStartE2EDuration="11.810440984s" podCreationTimestamp="2025-12-05 07:08:19 +0000 UTC" firstStartedPulling="2025-12-05 07:08:20.693588545 +0000 UTC m=+1334.763104877" lastFinishedPulling="2025-12-05 07:08:29.463633859 +0000 UTC m=+1343.533150201" observedRunningTime="2025-12-05 07:08:30.809077997 +0000 UTC m=+1344.878594329" watchObservedRunningTime="2025-12-05 07:08:30.810440984 +0000 UTC m=+1344.879957306" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.815089 4780 scope.go:117] "RemoveContainer" containerID="7d0604fcc8d102de413cadab5cfc7ff08b472baf10c9737ead45b9bdc22e6270" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.859928 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: E1205 07:08:30.860462 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860487 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: E1205 07:08:30.860507 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860515 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: E1205 07:08:30.860543 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860551 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: E1205 07:08:30.860585 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860594 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860836 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860859 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860895 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-httpd" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.860917 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" containerName="glance-log" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.863333 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.866873 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.867175 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gkns2" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.867348 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.867362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.883159 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.903997 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.926113 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.957602 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.974386 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.980413 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.980686 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 07:08:30 crc kubenswrapper[4780]: I1205 07:08:30.994721 4780 scope.go:117] "RemoveContainer" containerID="bd58ca4bcdb9ce3889888c2869ac2ebaa3277c1937950ee6e41b129e6d72bae6" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.016347 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019455 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4v8\" (UniqueName: \"kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019546 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc 
kubenswrapper[4780]: I1205 07:08:31.019567 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019636 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019675 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.019704 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.058055 4780 scope.go:117] "RemoveContainer" containerID="2ab4dd976fe70a55ae448cbd52e5d5965f6936762e89b58a57dd0e7681870ad2" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121110 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121176 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121221 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fj7\" (UniqueName: \"kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121275 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121298 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121337 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121401 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121513 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121585 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.121641 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4v8\" (UniqueName: \"kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.122430 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.122535 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.122589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.127703 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.128949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.129111 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.129911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.139776 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4v8\" (UniqueName: \"kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.151685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223375 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223440 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223611 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.223754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fj7\" (UniqueName: \"kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.224768 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.226519 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.227173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.227919 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.229982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.230177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.245989 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fj7\" (UniqueName: \"kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.250675 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.259475 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.282106 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.306669 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.783825 4780 generic.go:334] "Generic (PLEG): container finished" podID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerID="c59fceb83d3601f51f3655960c467caecb3313de2e95c9cf6e0840b622491da9" exitCode=0 Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.784179 4780 generic.go:334] "Generic (PLEG): container finished" podID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerID="172ebb176734ee191e1caa2015c4ab71d57ad5faf21dac1ecb49bcb38bdfb66f" exitCode=2 Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.783911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerDied","Data":"c59fceb83d3601f51f3655960c467caecb3313de2e95c9cf6e0840b622491da9"} Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.784234 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerDied","Data":"172ebb176734ee191e1caa2015c4ab71d57ad5faf21dac1ecb49bcb38bdfb66f"} Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.784256 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerDied","Data":"a84d5876c036a81fd57cb2b7bd44a7ff01c21c900dd2f0d600a480c89ad8b457"} Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.784193 4780 generic.go:334] "Generic (PLEG): container finished" podID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerID="a84d5876c036a81fd57cb2b7bd44a7ff01c21c900dd2f0d600a480c89ad8b457" exitCode=0 Dec 05 07:08:31 crc kubenswrapper[4780]: I1205 07:08:31.905872 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:08:31 crc kubenswrapper[4780]: W1205 07:08:31.907645 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a294e09_ff41_4fcc_81f4_2a674c77c239.slice/crio-894447edf40fe1f8908910d228e6331eab2f8f11df72629a8b6996da472579d9 WatchSource:0}: Error finding container 894447edf40fe1f8908910d228e6331eab2f8f11df72629a8b6996da472579d9: Status 404 returned error can't find the container with id 894447edf40fe1f8908910d228e6331eab2f8f11df72629a8b6996da472579d9 Dec 05 07:08:32 crc 
kubenswrapper[4780]: W1205 07:08:32.014574 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c681b8_252b_4d1a_8293_27528bc83ed8.slice/crio-af1117c2af1193882b9ea3f194b375432d358222d027e59524cbb53c4a965fd5 WatchSource:0}: Error finding container af1117c2af1193882b9ea3f194b375432d358222d027e59524cbb53c4a965fd5: Status 404 returned error can't find the container with id af1117c2af1193882b9ea3f194b375432d358222d027e59524cbb53c4a965fd5 Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.020956 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.150973 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5738fd8e-a30a-4470-8bf3-47c00286f574" path="/var/lib/kubelet/pods/5738fd8e-a30a-4470-8bf3-47c00286f574/volumes" Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.152363 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83acdff4-818d-4715-875b-0851c4fa04f0" path="/var/lib/kubelet/pods/83acdff4-818d-4715-875b-0851c4fa04f0/volumes" Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.806798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerStarted","Data":"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208"} Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.807136 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerStarted","Data":"894447edf40fe1f8908910d228e6331eab2f8f11df72629a8b6996da472579d9"} Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.808090 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerStarted","Data":"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14"} Dec 05 07:08:32 crc kubenswrapper[4780]: I1205 07:08:32.808110 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerStarted","Data":"af1117c2af1193882b9ea3f194b375432d358222d027e59524cbb53c4a965fd5"} Dec 05 07:08:33 crc kubenswrapper[4780]: I1205 07:08:33.821337 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerStarted","Data":"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb"} Dec 05 07:08:33 crc kubenswrapper[4780]: I1205 07:08:33.823393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerStarted","Data":"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6"} Dec 05 07:08:33 crc kubenswrapper[4780]: I1205 07:08:33.844979 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.844959616 podStartE2EDuration="3.844959616s" podCreationTimestamp="2025-12-05 07:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:33.842284714 +0000 UTC 
m=+1347.911801056" watchObservedRunningTime="2025-12-05 07:08:33.844959616 +0000 UTC m=+1347.914475948" Dec 05 07:08:33 crc kubenswrapper[4780]: I1205 07:08:33.873357 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8733397910000003 podStartE2EDuration="3.873339791s" podCreationTimestamp="2025-12-05 07:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:33.871471201 +0000 UTC m=+1347.940987533" watchObservedRunningTime="2025-12-05 07:08:33.873339791 +0000 UTC m=+1347.942856123" Dec 05 07:08:34 crc kubenswrapper[4780]: I1205 07:08:34.835894 4780 generic.go:334] "Generic (PLEG): container finished" podID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerID="a199a9f7025b4b11cb11164273274e5fdfb78326952d72579cc93462c59f176d" exitCode=0 Dec 05 07:08:34 crc kubenswrapper[4780]: I1205 07:08:34.836014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerDied","Data":"a199a9f7025b4b11cb11164273274e5fdfb78326952d72579cc93462c59f176d"} Dec 05 07:08:34 crc kubenswrapper[4780]: I1205 07:08:34.963809 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.141916 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142018 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142246 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142484 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jb5\" (UniqueName: \"kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.142559 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml\") pod \"11d641dc-6c58-45f8-a6b8-a39a31c06331\" (UID: \"11d641dc-6c58-45f8-a6b8-a39a31c06331\") " Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.143060 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.143616 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.143648 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11d641dc-6c58-45f8-a6b8-a39a31c06331-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.150444 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts" (OuterVolumeSpecName: "scripts") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.152388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5" (OuterVolumeSpecName: "kube-api-access-d4jb5") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). 
InnerVolumeSpecName "kube-api-access-d4jb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.175065 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.235407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.245963 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.245998 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.246009 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jb5\" (UniqueName: \"kubernetes.io/projected/11d641dc-6c58-45f8-a6b8-a39a31c06331-kube-api-access-d4jb5\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.246019 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.253051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data" (OuterVolumeSpecName: "config-data") pod "11d641dc-6c58-45f8-a6b8-a39a31c06331" (UID: "11d641dc-6c58-45f8-a6b8-a39a31c06331"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.347416 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d641dc-6c58-45f8-a6b8-a39a31c06331-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.849038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11d641dc-6c58-45f8-a6b8-a39a31c06331","Type":"ContainerDied","Data":"e015c55c0998a814e6d76c491a273218d0335d0a9dfbea3606ed4cdeb67bd175"} Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.849096 4780 scope.go:117] "RemoveContainer" containerID="c59fceb83d3601f51f3655960c467caecb3313de2e95c9cf6e0840b622491da9" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.850572 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.873393 4780 scope.go:117] "RemoveContainer" containerID="172ebb176734ee191e1caa2015c4ab71d57ad5faf21dac1ecb49bcb38bdfb66f" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.894806 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.907263 4780 scope.go:117] "RemoveContainer" containerID="a84d5876c036a81fd57cb2b7bd44a7ff01c21c900dd2f0d600a480c89ad8b457" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.915015 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.928001 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:35 crc kubenswrapper[4780]: E1205 07:08:35.929537 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="proxy-httpd" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.929660 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="proxy-httpd" Dec 05 07:08:35 crc kubenswrapper[4780]: E1205 07:08:35.929767 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-notification-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.929841 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-notification-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: E1205 07:08:35.929939 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-central-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.930028 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-central-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: E1205 07:08:35.930136 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="sg-core" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.930212 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="sg-core" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.930531 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="sg-core" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.930693 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="proxy-httpd" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.930780 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-central-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.931903 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" containerName="ceilometer-notification-agent" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.935156 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.941771 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.943157 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.946319 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.949172 4780 scope.go:117] "RemoveContainer" containerID="a199a9f7025b4b11cb11164273274e5fdfb78326952d72579cc93462c59f176d" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959652 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959741 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959827 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959844 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xmr\" (UniqueName: \"kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:35 crc kubenswrapper[4780]: I1205 07:08:35.959862 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061800 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xmr\" (UniqueName: \"kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061964 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.061991 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.062132 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.063062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.063306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.069259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.071238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.071575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.072251 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.086787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xmr\" (UniqueName: \"kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr\") pod \"ceilometer-0\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.152537 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d641dc-6c58-45f8-a6b8-a39a31c06331" path="/var/lib/kubelet/pods/11d641dc-6c58-45f8-a6b8-a39a31c06331/volumes" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.272838 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.758303 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:36 crc kubenswrapper[4780]: I1205 07:08:36.863043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerStarted","Data":"af5fb6df720801d6d5ae24990d56ee73a73061a80918de30c5f93b0ca4c37f7d"} Dec 05 07:08:37 crc kubenswrapper[4780]: I1205 07:08:37.877990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerStarted","Data":"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b"} Dec 05 07:08:38 crc kubenswrapper[4780]: I1205 07:08:38.890238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerStarted","Data":"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34"} Dec 05 07:08:38 crc kubenswrapper[4780]: I1205 07:08:38.892055 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerStarted","Data":"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258"} Dec 05 07:08:39 crc kubenswrapper[4780]: I1205 07:08:39.899079 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" containerID="28ccece98c496d594b51ff9c26f60e3f17e74c62271711c3b2b7f2111e13a950" exitCode=0 Dec 05 07:08:39 crc kubenswrapper[4780]: I1205 07:08:39.899145 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" 
event={"ID":"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3","Type":"ContainerDied","Data":"28ccece98c496d594b51ff9c26f60e3f17e74c62271711c3b2b7f2111e13a950"} Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.251656 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.282530 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.302858 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.307337 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.308398 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.339421 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.339500 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.367023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf2mt\" (UniqueName: \"kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt\") pod \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.367132 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts\") pod \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.367214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data\") pod \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.367273 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle\") pod \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\" (UID: \"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3\") " Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.373446 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt" (OuterVolumeSpecName: "kube-api-access-tf2mt") pod "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" (UID: "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3"). InnerVolumeSpecName "kube-api-access-tf2mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.374047 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts" (OuterVolumeSpecName: "scripts") pod "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" (UID: "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.374150 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.375086 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.406590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data" (OuterVolumeSpecName: "config-data") pod "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" (UID: "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.410320 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" (UID: "9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.470003 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.470040 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.470052 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf2mt\" (UniqueName: \"kubernetes.io/projected/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-kube-api-access-tf2mt\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.470062 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.917576 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerStarted","Data":"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4"} Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.918009 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.919949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" event={"ID":"9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3","Type":"ContainerDied","Data":"760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b"} 
Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920027 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760fdfc60ea7b6cf7e82b6ffb22925c117f922fb00f5ddd10ca6132145980b4b" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920029 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jrvl" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920550 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920576 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920591 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.920932 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:41 crc kubenswrapper[4780]: I1205 07:08:41.971463 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.455619247 podStartE2EDuration="6.971437354s" podCreationTimestamp="2025-12-05 07:08:35 +0000 UTC" firstStartedPulling="2025-12-05 07:08:36.757830207 +0000 UTC m=+1350.827346539" lastFinishedPulling="2025-12-05 07:08:41.273648314 +0000 UTC m=+1355.343164646" observedRunningTime="2025-12-05 07:08:41.961779693 +0000 UTC m=+1356.031296035" watchObservedRunningTime="2025-12-05 07:08:41.971437354 +0000 UTC m=+1356.040953686" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.052511 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:08:42 crc kubenswrapper[4780]: E1205 07:08:42.053266 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" containerName="nova-cell0-conductor-db-sync" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.053365 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" containerName="nova-cell0-conductor-db-sync" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.053678 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" containerName="nova-cell0-conductor-db-sync" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.054425 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.057940 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.058120 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gq457" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.071684 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.184320 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.184453 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.184487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9dv\" (UniqueName: \"kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.286805 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.286908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9dv\" (UniqueName: \"kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.287143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.294581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.295243 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.307541 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9dv\" (UniqueName: \"kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv\") pod \"nova-cell0-conductor-0\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:42 crc kubenswrapper[4780]: I1205 07:08:42.373508 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.037965 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:08:43 crc kubenswrapper[4780]: W1205 07:08:43.040338 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f97591_4528_4ed0_918c_b6de191c452a.slice/crio-6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28 WatchSource:0}: Error finding container 6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28: Status 404 returned error can't find the container with id 6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28 Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.947205 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.948926 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.947205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29f97591-4528-4ed0-918c-b6de191c452a","Type":"ContainerStarted","Data":"d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a"} Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.950097 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.950124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29f97591-4528-4ed0-918c-b6de191c452a","Type":"ContainerStarted","Data":"6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28"} Dec 05 07:08:43 crc kubenswrapper[4780]: I1205 07:08:43.964536 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.9645158980000001 podStartE2EDuration="1.964515898s" podCreationTimestamp="2025-12-05 07:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:43.960827859 +0000 UTC m=+1358.030344191" watchObservedRunningTime="2025-12-05 07:08:43.964515898 +0000 UTC m=+1358.034032230" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.096437 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.096700 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-central-agent" containerID="cri-o://1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b" gracePeriod=30 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 
07:08:44.096766 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="sg-core" containerID="cri-o://c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34" gracePeriod=30 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.096815 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="proxy-httpd" containerID="cri-o://b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4" gracePeriod=30 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.096851 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-notification-agent" containerID="cri-o://6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258" gracePeriod=30 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.361161 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.527074 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.528559 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.528864 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.714126 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958740 4780 generic.go:334] "Generic (PLEG): container finished" podID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerID="b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4" exitCode=0 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958777 4780 generic.go:334] "Generic (PLEG): container finished" podID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerID="c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34" exitCode=2 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958786 4780 generic.go:334] "Generic (PLEG): container finished" podID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerID="6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258" exitCode=0 Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958809 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerDied","Data":"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4"} Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958853 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerDied","Data":"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34"} Dec 05 07:08:44 crc kubenswrapper[4780]: I1205 07:08:44.958866 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerDied","Data":"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258"} Dec 05 07:08:50 crc 
kubenswrapper[4780]: I1205 07:08:50.356692 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499115 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499562 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499795 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.499786 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.500050 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xmr\" (UniqueName: \"kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr\") pod \"803f7a88-5efe-4682-baee-489b93bfdbc5\" (UID: \"803f7a88-5efe-4682-baee-489b93bfdbc5\") " Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.500896 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.500921 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/803f7a88-5efe-4682-baee-489b93bfdbc5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.505542 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts" (OuterVolumeSpecName: "scripts") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.507184 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr" (OuterVolumeSpecName: "kube-api-access-49xmr") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "kube-api-access-49xmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.532350 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.574393 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.601205 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data" (OuterVolumeSpecName: "config-data") pod "803f7a88-5efe-4682-baee-489b93bfdbc5" (UID: "803f7a88-5efe-4682-baee-489b93bfdbc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.602544 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.602602 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.602616 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.602646 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/803f7a88-5efe-4682-baee-489b93bfdbc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:50 crc kubenswrapper[4780]: I1205 07:08:50.602660 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xmr\" (UniqueName: \"kubernetes.io/projected/803f7a88-5efe-4682-baee-489b93bfdbc5-kube-api-access-49xmr\") on node \"crc\" DevicePath \"\"" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.014906 4780 generic.go:334] "Generic (PLEG): container finished" podID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerID="1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b" exitCode=0 Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.014948 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerDied","Data":"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b"} Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.014961 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.014975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"803f7a88-5efe-4682-baee-489b93bfdbc5","Type":"ContainerDied","Data":"af5fb6df720801d6d5ae24990d56ee73a73061a80918de30c5f93b0ca4c37f7d"} Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.014991 4780 scope.go:117] "RemoveContainer" containerID="b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.047950 4780 scope.go:117] "RemoveContainer" containerID="c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.063163 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.079162 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.093755 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.094365 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-central-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094388 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-central-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.094436 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="proxy-httpd" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094449 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="proxy-httpd" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.094476 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-notification-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094491 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-notification-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.094510 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="sg-core" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094522 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="sg-core" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094843 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-central-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094865 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="ceilometer-notification-agent" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094924 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="proxy-httpd" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.094950 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" containerName="sg-core" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.097898 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.114351 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.114366 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfvw\" (UniqueName: \"kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115112 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115124 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115308 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.115598 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.121589 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.131494 4780 scope.go:117] "RemoveContainer" containerID="6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 
07:08:51.152639 4780 scope.go:117] "RemoveContainer" containerID="1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.176252 4780 scope.go:117] "RemoveContainer" containerID="b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.176768 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4\": container with ID starting with b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4 not found: ID does not exist" containerID="b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.176824 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4"} err="failed to get container status \"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4\": rpc error: code = NotFound desc = could not find container \"b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4\": container with ID starting with b55e84e933251490a05cc25289844b28af073183520471899185c6859ad1f0d4 not found: ID does not exist" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.176862 4780 scope.go:117] "RemoveContainer" containerID="c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.177253 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34\": container with ID starting with c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34 not found: ID does not exist" containerID="c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.177301 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34"} err="failed to get container status \"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34\": rpc error: code = NotFound desc = could not find container \"c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34\": container with ID starting with c0e19d72e22d7a33a6988802fbbd2eba6bd356ed97443ee710d4ce6d7da15a34 not found: ID does not exist" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.177335 4780 scope.go:117] "RemoveContainer" containerID="6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.177649 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258\": container with ID starting with 6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258 not found: ID does not exist" containerID="6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.177672 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258"} err="failed to get container status 
\"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258\": rpc error: code = NotFound desc = could not find container \"6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258\": container with ID starting with 6752739957be846eb7fd7abb3b7e93d1d8c2f98fe3986dcc125c7707b18e2258 not found: ID does not exist" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.177689 4780 scope.go:117] "RemoveContainer" containerID="1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b" Dec 05 07:08:51 crc kubenswrapper[4780]: E1205 07:08:51.178062 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b\": container with ID starting with 1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b not found: ID does not exist" containerID="1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.178092 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b"} err="failed to get container status \"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b\": rpc error: code = NotFound desc = could not find container \"1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b\": container with ID starting with 1adedfa699806f887ec46e37d6a013f9f53192f4b62ecfd37159072e5cc49a1b not found: ID does not exist" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.217487 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.217545 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.217616 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.217646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.218294 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.218369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.218464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.218530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfvw\" (UniqueName: \"kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.218951 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.221682 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.221969 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.222732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.224228 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.249245 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfvw\" (UniqueName: \"kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw\") pod \"ceilometer-0\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") " pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.433898 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.898527 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:08:51 crc kubenswrapper[4780]: W1205 07:08:51.904129 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b81e7c_ad4f_44cc_86f0_f36eabb3c45d.slice/crio-cd7cb29de7dc7de4c49b7dbae4d680fbd823be8c80bccc561e3a2863f0858567 WatchSource:0}: Error finding container cd7cb29de7dc7de4c49b7dbae4d680fbd823be8c80bccc561e3a2863f0858567: Status 404 returned error can't find the container with id cd7cb29de7dc7de4c49b7dbae4d680fbd823be8c80bccc561e3a2863f0858567 Dec 05 07:08:51 crc kubenswrapper[4780]: I1205 07:08:51.906626 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.024466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerStarted","Data":"cd7cb29de7dc7de4c49b7dbae4d680fbd823be8c80bccc561e3a2863f0858567"} Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.150381 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="803f7a88-5efe-4682-baee-489b93bfdbc5" path="/var/lib/kubelet/pods/803f7a88-5efe-4682-baee-489b93bfdbc5/volumes" Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.409205 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.886578 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2mprp"] Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.888010 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.891326 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.891587 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 05 07:08:52 crc kubenswrapper[4780]: I1205 07:08:52.898479 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2mprp"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.050813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerStarted","Data":"df5d8f584255ce7a41ab09763b7d0e65cff7f8332d83b675de804e93cddea0c7"} Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.057026 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.057115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.057160 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2kcn\" (UniqueName: \"kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.057224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.077829 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.079867 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.088753 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.103571 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.148946 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.150405 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.153416 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.162784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.162913 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.162984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.163021 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2kcn\" (UniqueName: \"kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.184586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.204966 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.214606 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.215268 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.224694 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.226286 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.236634 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.241984 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.242545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2kcn\" (UniqueName: \"kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn\") pod \"nova-cell0-cell-mapping-2mprp\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265236 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265288 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265330 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265369 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265462 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265482 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v998t\" (UniqueName: \"kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.265506 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjb8m\" (UniqueName: \"kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.356190 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.357738 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.370986 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371060 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371138 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5bp\" (UniqueName: \"kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371190 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371360 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371400 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v998t\" (UniqueName: \"kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371489 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjb8m\" (UniqueName: \"kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371609 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371689 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc 
kubenswrapper[4780]: I1205 07:08:53.371738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.371784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.372146 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.385549 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.387130 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.391208 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.394025 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.394532 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.395033 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.411530 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.414007 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v998t\" (UniqueName: \"kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t\") pod \"nova-scheduler-0\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.428305 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjb8m\" (UniqueName: 
\"kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m\") pod \"nova-api-0\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.431043 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.450661 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476313 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476374 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476520 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476542 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gk98\" (UniqueName: \"kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476576 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476628 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476666 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhl2\" (UniqueName: \"kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476691 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5bp\" (UniqueName: \"kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476707 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.476725 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.478152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.484994 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.487040 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.510054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5bp\" (UniqueName: \"kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp\") pod \"nova-metadata-0\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") " pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.513595 4780 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.578889 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579155 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579213 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gk98\" (UniqueName: \"kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579285 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579318 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlhl2\" (UniqueName: \"kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579504 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.579596 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " 
pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.581213 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.581792 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.582005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.582180 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.582316 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.583178 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.584904 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.599216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gk98\" (UniqueName: \"kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98\") pod \"dnsmasq-dns-7bd87576bf-w54fp\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.599267 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlhl2\" (UniqueName: \"kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.610199 4780 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.639680 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.680458 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.700535 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:08:53 crc kubenswrapper[4780]: I1205 07:08:53.793980 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.057342 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2mprp"] Dec 05 07:08:54 crc kubenswrapper[4780]: W1205 07:08:54.077816 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b6ab49_8909_4604_bbf0_1d5475a52cdb.slice/crio-1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff WatchSource:0}: Error finding container 1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff: Status 404 returned error can't find the container with id 1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.122665 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tdvq2"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.124440 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.130754 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.136708 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tdvq2"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.137227 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.219041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.221590 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhbc\" (UniqueName: \"kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.221642 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.221696 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.221727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.323318 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.323586 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.323726 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbhbc\" (UniqueName: \"kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.323749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.341215 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.342470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.355538 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbhbc\" (UniqueName: \"kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.355740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts\") pod \"nova-cell1-conductor-db-sync-tdvq2\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.405996 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.439207 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.474066 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.542289 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.684851 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:08:54 crc kubenswrapper[4780]: I1205 07:08:54.990742 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tdvq2"] Dec 05 07:08:54 crc kubenswrapper[4780]: W1205 07:08:54.994509 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56db168c_9500_4a17_9cd0_1bcfeeee167b.slice/crio-7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541 WatchSource:0}: Error finding container 7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541: Status 404 returned error can't find the container with id 7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541 Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.089188 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2mprp" event={"ID":"30b6ab49-8909-4604-bbf0-1d5475a52cdb","Type":"ContainerStarted","Data":"33698a038aab8dc0dfa876c85ed821b198c81dd1835f1e33bee13fd9c143b083"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.089229 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2mprp" event={"ID":"30b6ab49-8909-4604-bbf0-1d5475a52cdb","Type":"ContainerStarted","Data":"1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.114100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerStarted","Data":"e319c16404d228e3b9f27ff16585859030e0d4eeae70292340fcfe04695d2e07"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.114160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerStarted","Data":"9aa68986804bdba27dfae6148f0e504ceca1ead026dae332ef3b845cada8ded2"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.119241 4780 generic.go:334] "Generic (PLEG): container finished" podID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerID="508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5" exitCode=0 Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.119314 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" 
event={"ID":"8fcbc6a3-d079-4d42-9761-572c3068dbb8","Type":"ContainerDied","Data":"508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.119342 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" event={"ID":"8fcbc6a3-d079-4d42-9761-572c3068dbb8","Type":"ContainerStarted","Data":"a7b96a4ce65ce03eb654dc202a60896368c9f6f95d0cb45fd9cfeb0d496cd3a6"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.126976 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b842be9-82f0-42b1-aee2-97452b8cda61","Type":"ContainerStarted","Data":"d1adfc93afcde0b563f60c5300defa5bb1450a1aa9ef48e6efeb74b20a19a75c"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.128421 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerStarted","Data":"71aca912ba4e947d67692d30fb79844a4560fb9ce59cc5d3e257a1ae2b522f16"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.129345 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" event={"ID":"56db168c-9500-4a17-9cd0-1bcfeeee167b","Type":"ContainerStarted","Data":"7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.129679 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2mprp" podStartSLOduration=3.129653359 podStartE2EDuration="3.129653359s" podCreationTimestamp="2025-12-05 07:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:55.111025206 +0000 UTC m=+1369.180541558" watchObservedRunningTime="2025-12-05 07:08:55.129653359 +0000 UTC m=+1369.199169691" Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.130424 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerStarted","Data":"a20de16eee26524dc42e6f9b9706a60ac94ffb74d80dcdd0c422c45d835e94a5"} Dec 05 07:08:55 crc kubenswrapper[4780]: I1205 07:08:55.132467 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"890c803f-aa78-4d31-a88a-eabd11580461","Type":"ContainerStarted","Data":"a8ab39cc5b2f8feb52ed95ab2fbb9c21a3e0a51e98f71ab4089a99cbf2f38fd5"} Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.157241 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" event={"ID":"8fcbc6a3-d079-4d42-9761-572c3068dbb8","Type":"ContainerStarted","Data":"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97"} Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.159628 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.167582 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" event={"ID":"56db168c-9500-4a17-9cd0-1bcfeeee167b","Type":"ContainerStarted","Data":"3434bde80ccbbb622b689d6302932b3de48696d0bfb3f6dd895bbc7bdccbf874"} Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.320637 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-conductor-db-sync-tdvq2" podStartSLOduration=2.32062076 podStartE2EDuration="2.32062076s" podCreationTimestamp="2025-12-05 07:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:56.291393231 +0000 UTC m=+1370.360909563" watchObservedRunningTime="2025-12-05 07:08:56.32062076 +0000 UTC m=+1370.390137092" Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.329065 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" podStartSLOduration=3.329048417 podStartE2EDuration="3.329048417s" podCreationTimestamp="2025-12-05 07:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:08:56.309550261 +0000 UTC m=+1370.379066593" watchObservedRunningTime="2025-12-05 07:08:56.329048417 +0000 UTC m=+1370.398564739" Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.823559 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:08:56 crc kubenswrapper[4780]: I1205 07:08:56.835609 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.203033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"890c803f-aa78-4d31-a88a-eabd11580461","Type":"ContainerStarted","Data":"d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.207299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerStarted","Data":"a4a74b22f43ec1f7bb567a26826d7c7ec86726652e7641ef686b6f50fbe33c68"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.208283 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.210297 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b842be9-82f0-42b1-aee2-97452b8cda61","Type":"ContainerStarted","Data":"69d9fb1b4be7a54df2d658eec2e895ce95479d6848210731e3f484c81badd3c6"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.210408 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6b842be9-82f0-42b1-aee2-97452b8cda61" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://69d9fb1b4be7a54df2d658eec2e895ce95479d6848210731e3f484c81badd3c6" gracePeriod=30 Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.216655 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerStarted","Data":"57f0d56f57dba5a76b5ae9c524ad14669a4cf2640f360dae3d76c67b3ea0971c"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.216703 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerStarted","Data":"5b1b905a6f84e56b61f4ec8b3c8c3a526b1d095c6d113428524ee109306574f3"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.216819 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-log" containerID="cri-o://5b1b905a6f84e56b61f4ec8b3c8c3a526b1d095c6d113428524ee109306574f3" gracePeriod=30 Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.216929 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-metadata" containerID="cri-o://57f0d56f57dba5a76b5ae9c524ad14669a4cf2640f360dae3d76c67b3ea0971c" gracePeriod=30 Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.219070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerStarted","Data":"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.219107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerStarted","Data":"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275"} Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.228943 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.118288781 podStartE2EDuration="6.228921138s" podCreationTimestamp="2025-12-05 07:08:53 +0000 UTC" firstStartedPulling="2025-12-05 07:08:54.248561065 +0000 UTC m=+1368.318077397" lastFinishedPulling="2025-12-05 07:08:58.359193422 +0000 UTC m=+1372.428709754" observedRunningTime="2025-12-05 07:08:59.226136683 +0000 UTC m=+1373.295653015" watchObservedRunningTime="2025-12-05 07:08:59.228921138 +0000 UTC m=+1373.298437460" Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.248742 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3154027790000002 podStartE2EDuration="6.248724093s" podCreationTimestamp="2025-12-05 07:08:53 +0000 UTC" firstStartedPulling="2025-12-05 07:08:54.426834004 +0000 UTC m=+1368.496350336" lastFinishedPulling="2025-12-05 07:08:58.360155318 +0000 UTC m=+1372.429671650" observedRunningTime="2025-12-05 07:08:59.24491432 +0000 UTC m=+1373.314430652" watchObservedRunningTime="2025-12-05 07:08:59.248724093 +0000 UTC m=+1373.318240425" Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.274344 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.66654971 podStartE2EDuration="6.274317053s" podCreationTimestamp="2025-12-05 07:08:53 +0000 UTC" firstStartedPulling="2025-12-05 07:08:54.711214054 +0000 UTC m=+1368.780730386" lastFinishedPulling="2025-12-05 07:08:58.318981397 +0000 UTC m=+1372.388497729" observedRunningTime="2025-12-05 07:08:59.267693444 +0000 UTC m=+1373.337209796" watchObservedRunningTime="2025-12-05 07:08:59.274317053 +0000 UTC m=+1373.343833385" Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.299447 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.845841254 podStartE2EDuration="8.299425221s" podCreationTimestamp="2025-12-05 07:08:51 +0000 UTC" firstStartedPulling="2025-12-05 07:08:51.906414407 +0000 UTC m=+1365.975930739" lastFinishedPulling="2025-12-05 07:08:58.359998384 +0000 UTC m=+1372.429514706" observedRunningTime="2025-12-05 07:08:59.292852753 +0000 UTC m=+1373.362369085" watchObservedRunningTime="2025-12-05 
07:08:59.299425221 +0000 UTC m=+1373.368941553" Dec 05 07:08:59 crc kubenswrapper[4780]: I1205 07:08:59.335434 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.421054478 podStartE2EDuration="6.33541359s" podCreationTimestamp="2025-12-05 07:08:53 +0000 UTC" firstStartedPulling="2025-12-05 07:08:54.444913662 +0000 UTC m=+1368.514429994" lastFinishedPulling="2025-12-05 07:08:58.359272774 +0000 UTC m=+1372.428789106" observedRunningTime="2025-12-05 07:08:59.331757862 +0000 UTC m=+1373.401274194" watchObservedRunningTime="2025-12-05 07:08:59.33541359 +0000 UTC m=+1373.404929922" Dec 05 07:09:00 crc kubenswrapper[4780]: I1205 07:09:00.229792 4780 generic.go:334] "Generic (PLEG): container finished" podID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerID="5b1b905a6f84e56b61f4ec8b3c8c3a526b1d095c6d113428524ee109306574f3" exitCode=143 Dec 05 07:09:00 crc kubenswrapper[4780]: I1205 07:09:00.229862 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerDied","Data":"5b1b905a6f84e56b61f4ec8b3c8c3a526b1d095c6d113428524ee109306574f3"} Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.261599 4780 generic.go:334] "Generic (PLEG): container finished" podID="30b6ab49-8909-4604-bbf0-1d5475a52cdb" containerID="33698a038aab8dc0dfa876c85ed821b198c81dd1835f1e33bee13fd9c143b083" exitCode=0 Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.261670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2mprp" event={"ID":"30b6ab49-8909-4604-bbf0-1d5475a52cdb","Type":"ContainerDied","Data":"33698a038aab8dc0dfa876c85ed821b198c81dd1835f1e33bee13fd9c143b083"} Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.263606 4780 generic.go:334] "Generic (PLEG): container finished" podID="56db168c-9500-4a17-9cd0-1bcfeeee167b" containerID="3434bde80ccbbb622b689d6302932b3de48696d0bfb3f6dd895bbc7bdccbf874" exitCode=0 Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.263652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" event={"ID":"56db168c-9500-4a17-9cd0-1bcfeeee167b","Type":"ContainerDied","Data":"3434bde80ccbbb622b689d6302932b3de48696d0bfb3f6dd895bbc7bdccbf874"} Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.611235 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.611298 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.636775 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.640920 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.640962 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.683068 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.702122 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:09:03 
crc kubenswrapper[4780]: I1205 07:09:03.702175 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.762058 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.762294 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="dnsmasq-dns" containerID="cri-o://a2dee3018e38265f1fab81663bc435e93805201a6800848b3cb8d8282d2f7c3a" gracePeriod=10 Dec 05 07:09:03 crc kubenswrapper[4780]: I1205 07:09:03.797308 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.274000 4780 generic.go:334] "Generic (PLEG): container finished" podID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerID="a2dee3018e38265f1fab81663bc435e93805201a6800848b3cb8d8282d2f7c3a" exitCode=0 Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.274587 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" event={"ID":"3d187a7e-2376-4b39-84b2-73ecfa0b15bf","Type":"ContainerDied","Data":"a2dee3018e38265f1fab81663bc435e93805201a6800848b3cb8d8282d2f7c3a"} Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.274655 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" event={"ID":"3d187a7e-2376-4b39-84b2-73ecfa0b15bf","Type":"ContainerDied","Data":"d19289d49d355c5d362ab535e485b508e79bc9f75d724cf5ba09f5875e705d19"} Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.274670 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d19289d49d355c5d362ab535e485b508e79bc9f75d724cf5ba09f5875e705d19" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.288942 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.324282 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.476976 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.477060 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5zp\" (UniqueName: \"kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.477148 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.477170 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.477205 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.477273 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc\") pod \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\" (UID: \"3d187a7e-2376-4b39-84b2-73ecfa0b15bf\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.528253 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp" (OuterVolumeSpecName: "kube-api-access-fc5zp") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "kube-api-access-fc5zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.560872 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.579721 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5zp\" (UniqueName: \"kubernetes.io/projected/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-kube-api-access-fc5zp\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.580147 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.581565 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.588332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config" (OuterVolumeSpecName: "config") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.608427 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.647431 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d187a7e-2376-4b39-84b2-73ecfa0b15bf" (UID: "3d187a7e-2376-4b39-84b2-73ecfa0b15bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.658159 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.682432 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.682471 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.682486 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.682496 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d187a7e-2376-4b39-84b2-73ecfa0b15bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.729104 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.784112 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.784412 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.785447 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts\") pod \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.785507 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2kcn\" (UniqueName: \"kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn\") pod \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.785656 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data\") pod \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.785682 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle\") pod \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\" (UID: \"30b6ab49-8909-4604-bbf0-1d5475a52cdb\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.791869 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn" (OuterVolumeSpecName: "kube-api-access-x2kcn") pod "30b6ab49-8909-4604-bbf0-1d5475a52cdb" (UID: "30b6ab49-8909-4604-bbf0-1d5475a52cdb"). InnerVolumeSpecName "kube-api-access-x2kcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.795342 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts" (OuterVolumeSpecName: "scripts") pod "30b6ab49-8909-4604-bbf0-1d5475a52cdb" (UID: "30b6ab49-8909-4604-bbf0-1d5475a52cdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.817817 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30b6ab49-8909-4604-bbf0-1d5475a52cdb" (UID: "30b6ab49-8909-4604-bbf0-1d5475a52cdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.822215 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data" (OuterVolumeSpecName: "config-data") pod "30b6ab49-8909-4604-bbf0-1d5475a52cdb" (UID: "30b6ab49-8909-4604-bbf0-1d5475a52cdb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.898684 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbhbc\" (UniqueName: \"kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc\") pod \"56db168c-9500-4a17-9cd0-1bcfeeee167b\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.898973 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle\") pod \"56db168c-9500-4a17-9cd0-1bcfeeee167b\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899010 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts\") pod \"56db168c-9500-4a17-9cd0-1bcfeeee167b\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899124 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data\") pod \"56db168c-9500-4a17-9cd0-1bcfeeee167b\" (UID: \"56db168c-9500-4a17-9cd0-1bcfeeee167b\") " Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899806 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899823 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899832 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30b6ab49-8909-4604-bbf0-1d5475a52cdb-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.899841 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2kcn\" (UniqueName: \"kubernetes.io/projected/30b6ab49-8909-4604-bbf0-1d5475a52cdb-kube-api-access-x2kcn\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.902545 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts" (OuterVolumeSpecName: "scripts") pod "56db168c-9500-4a17-9cd0-1bcfeeee167b" (UID: "56db168c-9500-4a17-9cd0-1bcfeeee167b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.902604 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc" (OuterVolumeSpecName: "kube-api-access-xbhbc") pod "56db168c-9500-4a17-9cd0-1bcfeeee167b" (UID: "56db168c-9500-4a17-9cd0-1bcfeeee167b"). InnerVolumeSpecName "kube-api-access-xbhbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.928347 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56db168c-9500-4a17-9cd0-1bcfeeee167b" (UID: "56db168c-9500-4a17-9cd0-1bcfeeee167b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:04 crc kubenswrapper[4780]: I1205 07:09:04.931997 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data" (OuterVolumeSpecName: "config-data") pod "56db168c-9500-4a17-9cd0-1bcfeeee167b" (UID: "56db168c-9500-4a17-9cd0-1bcfeeee167b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.001110 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbhbc\" (UniqueName: \"kubernetes.io/projected/56db168c-9500-4a17-9cd0-1bcfeeee167b-kube-api-access-xbhbc\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.001154 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.001165 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.001173 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56db168c-9500-4a17-9cd0-1bcfeeee167b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.287531 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.287530 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tdvq2" event={"ID":"56db168c-9500-4a17-9cd0-1bcfeeee167b","Type":"ContainerDied","Data":"7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541"} Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.287632 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7762971f9869389a07d4757075c261b103339fa1d0c4cbfb5aa17ec0ac015541" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.289844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2mprp" event={"ID":"30b6ab49-8909-4604-bbf0-1d5475a52cdb","Type":"ContainerDied","Data":"1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff"} Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.289913 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d54d44c7-svvm6" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.289939 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2mprp" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.289925 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b163c5a3113f23921558e522030da9bbbb7e498b26071115484eba4f3dc1bff" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.383090 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:09:05 crc kubenswrapper[4780]: E1205 07:09:05.384452 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="dnsmasq-dns" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384642 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="dnsmasq-dns" Dec 05 07:09:05 crc kubenswrapper[4780]: E1205 07:09:05.384662 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56db168c-9500-4a17-9cd0-1bcfeeee167b" containerName="nova-cell1-conductor-db-sync" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384669 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="56db168c-9500-4a17-9cd0-1bcfeeee167b" containerName="nova-cell1-conductor-db-sync" Dec 05 07:09:05 crc kubenswrapper[4780]: E1205 07:09:05.384678 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b6ab49-8909-4604-bbf0-1d5475a52cdb" containerName="nova-manage" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384684 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b6ab49-8909-4604-bbf0-1d5475a52cdb" containerName="nova-manage" Dec 05 07:09:05 crc kubenswrapper[4780]: E1205 07:09:05.384705 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="init" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384710 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="init" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384907 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b6ab49-8909-4604-bbf0-1d5475a52cdb" containerName="nova-manage" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384926 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="56db168c-9500-4a17-9cd0-1bcfeeee167b" containerName="nova-cell1-conductor-db-sync" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.384957 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" containerName="dnsmasq-dns" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.385996 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.391798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.402828 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.410495 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.410531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.410570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gt48\" (UniqueName: \"kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.427771 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.441940 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56d54d44c7-svvm6"] Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.512008 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.512057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.512111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gt48\" (UniqueName: \"kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.515945 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.516429 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-log" containerID="cri-o://50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275" gracePeriod=30 
Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.516960 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-api" containerID="cri-o://dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35" gracePeriod=30 Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.521676 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.531387 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.537231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.540151 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gt48\" (UniqueName: \"kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48\") pod \"nova-cell1-conductor-0\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:05 crc kubenswrapper[4780]: I1205 07:09:05.729859 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.155531 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d187a7e-2376-4b39-84b2-73ecfa0b15bf" path="/var/lib/kubelet/pods/3d187a7e-2376-4b39-84b2-73ecfa0b15bf/volumes" Dec 05 07:09:06 crc kubenswrapper[4780]: W1205 07:09:06.289142 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda6b602_0a2c_4047_94ba_f8cdf4bbcf0c.slice/crio-8a66613de910b1fa6973c4549f30bc0960c3659829952e1502bbcddbd917cefb WatchSource:0}: Error finding container 8a66613de910b1fa6973c4549f30bc0960c3659829952e1502bbcddbd917cefb: Status 404 returned error can't find the container with id 8a66613de910b1fa6973c4549f30bc0960c3659829952e1502bbcddbd917cefb Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.294448 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.304397 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff7025d7-6055-4219-ae31-8b601082073d" containerID="50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275" exitCode=143 Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.304436 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerDied","Data":"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275"} Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.306086 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c","Type":"ContainerStarted","Data":"8a66613de910b1fa6973c4549f30bc0960c3659829952e1502bbcddbd917cefb"} Dec 05 07:09:06 crc kubenswrapper[4780]: I1205 07:09:06.306367 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="890c803f-aa78-4d31-a88a-eabd11580461" containerName="nova-scheduler-scheduler" containerID="cri-o://d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" gracePeriod=30 Dec 05 07:09:07 crc kubenswrapper[4780]: I1205 07:09:07.320543 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c","Type":"ContainerStarted","Data":"6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a"} Dec 05 07:09:07 crc kubenswrapper[4780]: I1205 07:09:07.321726 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:07 crc kubenswrapper[4780]: I1205 07:09:07.347132 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.347112601 podStartE2EDuration="2.347112601s" podCreationTimestamp="2025-12-05 07:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:07.340196755 +0000 UTC m=+1381.409713087" watchObservedRunningTime="2025-12-05 07:09:07.347112601 +0000 UTC m=+1381.416628933" Dec 05 07:09:08 crc kubenswrapper[4780]: E1205 07:09:08.613632 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:08 crc kubenswrapper[4780]: E1205 07:09:08.615169 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:08 crc kubenswrapper[4780]: E1205 07:09:08.616614 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:08 crc kubenswrapper[4780]: E1205 07:09:08.616690 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="890c803f-aa78-4d31-a88a-eabd11580461" containerName="nova-scheduler-scheduler" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.348709 4780 generic.go:334] "Generic (PLEG): container finished" podID="890c803f-aa78-4d31-a88a-eabd11580461" containerID="d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" exitCode=0 Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.348799 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"890c803f-aa78-4d31-a88a-eabd11580461","Type":"ContainerDied","Data":"d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d"} Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.349085 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"890c803f-aa78-4d31-a88a-eabd11580461","Type":"ContainerDied","Data":"a8ab39cc5b2f8feb52ed95ab2fbb9c21a3e0a51e98f71ab4089a99cbf2f38fd5"} Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.349102 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ab39cc5b2f8feb52ed95ab2fbb9c21a3e0a51e98f71ab4089a99cbf2f38fd5" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.374086 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.423567 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v998t\" (UniqueName: \"kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t\") pod \"890c803f-aa78-4d31-a88a-eabd11580461\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.423705 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data\") pod \"890c803f-aa78-4d31-a88a-eabd11580461\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.423801 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle\") pod \"890c803f-aa78-4d31-a88a-eabd11580461\" (UID: \"890c803f-aa78-4d31-a88a-eabd11580461\") " Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.432571 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t" (OuterVolumeSpecName: "kube-api-access-v998t") pod "890c803f-aa78-4d31-a88a-eabd11580461" (UID: "890c803f-aa78-4d31-a88a-eabd11580461"). InnerVolumeSpecName "kube-api-access-v998t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.459661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data" (OuterVolumeSpecName: "config-data") pod "890c803f-aa78-4d31-a88a-eabd11580461" (UID: "890c803f-aa78-4d31-a88a-eabd11580461"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.467770 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "890c803f-aa78-4d31-a88a-eabd11580461" (UID: "890c803f-aa78-4d31-a88a-eabd11580461"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.527180 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v998t\" (UniqueName: \"kubernetes.io/projected/890c803f-aa78-4d31-a88a-eabd11580461-kube-api-access-v998t\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.527478 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:10 crc kubenswrapper[4780]: I1205 07:09:10.527491 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890c803f-aa78-4d31-a88a-eabd11580461-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.246328 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.349799 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjb8m\" (UniqueName: \"kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m\") pod \"ff7025d7-6055-4219-ae31-8b601082073d\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.349966 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs\") pod \"ff7025d7-6055-4219-ae31-8b601082073d\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.350091 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle\") pod \"ff7025d7-6055-4219-ae31-8b601082073d\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.350168 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data\") pod \"ff7025d7-6055-4219-ae31-8b601082073d\" (UID: \"ff7025d7-6055-4219-ae31-8b601082073d\") " Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.350840 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs" (OuterVolumeSpecName: "logs") pod "ff7025d7-6055-4219-ae31-8b601082073d" (UID: "ff7025d7-6055-4219-ae31-8b601082073d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.354284 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m" (OuterVolumeSpecName: "kube-api-access-cjb8m") pod "ff7025d7-6055-4219-ae31-8b601082073d" (UID: "ff7025d7-6055-4219-ae31-8b601082073d"). InnerVolumeSpecName "kube-api-access-cjb8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.365973 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff7025d7-6055-4219-ae31-8b601082073d" containerID="dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35" exitCode=0 Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.366051 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.366067 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerDied","Data":"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35"} Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.366102 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ff7025d7-6055-4219-ae31-8b601082073d","Type":"ContainerDied","Data":"a20de16eee26524dc42e6f9b9706a60ac94ffb74d80dcdd0c422c45d835e94a5"} Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.366119 4780 scope.go:117] "RemoveContainer" containerID="dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.366051 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.380297 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff7025d7-6055-4219-ae31-8b601082073d" (UID: "ff7025d7-6055-4219-ae31-8b601082073d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.392148 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data" (OuterVolumeSpecName: "config-data") pod "ff7025d7-6055-4219-ae31-8b601082073d" (UID: "ff7025d7-6055-4219-ae31-8b601082073d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.452140 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.452431 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff7025d7-6055-4219-ae31-8b601082073d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.452442 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjb8m\" (UniqueName: \"kubernetes.io/projected/ff7025d7-6055-4219-ae31-8b601082073d-kube-api-access-cjb8m\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.452453 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff7025d7-6055-4219-ae31-8b601082073d-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.459502 4780 scope.go:117] "RemoveContainer" containerID="50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.466098 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.483928 4780 scope.go:117] "RemoveContainer" containerID="dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35" Dec 05 07:09:11 crc kubenswrapper[4780]: E1205 07:09:11.484451 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35\": container with ID starting with dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35 not found: ID does not exist" containerID="dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.484504 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35"} err="failed to get container status \"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35\": rpc error: code = NotFound desc = could not find container \"dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35\": container with ID starting with dee1b5817a7e1caf3f04aaf9a987d514e5e2a0308948830b1f068debadbe3b35 not found: ID does not exist" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.484533 4780 scope.go:117] "RemoveContainer" containerID="50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275" Dec 05 07:09:11 crc kubenswrapper[4780]: E1205 07:09:11.484854 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275\": container with ID starting with 50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275 not found: ID does not exist" containerID="50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.484974 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275"} err="failed to get container status \"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275\": rpc error: code = NotFound desc = could not find container \"50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275\": container with ID starting with 50d4e53d9c8a3fba818f9e38ed0d9b9de8ff8d425618f9735a3ba6ecb6205275 not found: ID does not exist" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.487487 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.497849 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: E1205 07:09:11.498355 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-api" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498382 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-api" Dec 05 07:09:11 crc kubenswrapper[4780]: E1205 07:09:11.498417 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-log" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498424 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-log" Dec 05 07:09:11 crc kubenswrapper[4780]: E1205 07:09:11.498438 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890c803f-aa78-4d31-a88a-eabd11580461" containerName="nova-scheduler-scheduler" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498444 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="890c803f-aa78-4d31-a88a-eabd11580461" containerName="nova-scheduler-scheduler" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498613 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="890c803f-aa78-4d31-a88a-eabd11580461" containerName="nova-scheduler-scheduler" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498634 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-api" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.498643 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7025d7-6055-4219-ae31-8b601082073d" containerName="nova-api-log" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.499539 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.502488 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.511746 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.554145 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvfr\" (UniqueName: \"kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.554518 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.554602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.655936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.656001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvfr\" (UniqueName: \"kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.656120 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.660182 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.660236 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.674603 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvfr\" (UniqueName: 
\"kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr\") pod \"nova-scheduler-0\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.703069 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.726194 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.736055 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.738024 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.741455 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.745231 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.757730 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn9j\" (UniqueName: \"kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.757993 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.758029 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.758068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.818186 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.860383 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.860602 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hn9j\" (UniqueName: \"kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.860677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.860699 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.860966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.865114 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.865677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:11 crc kubenswrapper[4780]: I1205 07:09:11.879112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hn9j\" (UniqueName: \"kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j\") pod \"nova-api-0\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " pod="openstack/nova-api-0" Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.061985 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.174014 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890c803f-aa78-4d31-a88a-eabd11580461" path="/var/lib/kubelet/pods/890c803f-aa78-4d31-a88a-eabd11580461/volumes" Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.174744 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7025d7-6055-4219-ae31-8b601082073d" path="/var/lib/kubelet/pods/ff7025d7-6055-4219-ae31-8b601082073d/volumes" Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.244338 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:12 crc kubenswrapper[4780]: W1205 07:09:12.244609 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0139888_4d6b_4749_894c_46a370518e12.slice/crio-5312fc8ca11f26def96e6acb76cff7b398dbb57ee26a272780c46ca55db82914 WatchSource:0}: Error finding container 5312fc8ca11f26def96e6acb76cff7b398dbb57ee26a272780c46ca55db82914: Status 404 returned error can't find the container with id 5312fc8ca11f26def96e6acb76cff7b398dbb57ee26a272780c46ca55db82914 Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.376869 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0139888-4d6b-4749-894c-46a370518e12","Type":"ContainerStarted","Data":"5312fc8ca11f26def96e6acb76cff7b398dbb57ee26a272780c46ca55db82914"} Dec 05 07:09:12 crc kubenswrapper[4780]: I1205 07:09:12.499844 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:12 crc kubenswrapper[4780]: W1205 07:09:12.502398 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b66a8ee_b2bb_41d3_8a32_498b297ed509.slice/crio-d2561a270e530b3278784f5fa9b1e242d55ca64c7cb9080b25d994a5c3175adf WatchSource:0}: Error finding container d2561a270e530b3278784f5fa9b1e242d55ca64c7cb9080b25d994a5c3175adf: Status 404 returned error can't find the container with id d2561a270e530b3278784f5fa9b1e242d55ca64c7cb9080b25d994a5c3175adf Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.386780 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0139888-4d6b-4749-894c-46a370518e12","Type":"ContainerStarted","Data":"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210"} Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.389637 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerStarted","Data":"fd3575cc6c6b206010006cb993a056ca1dc50179ebe384773c309cb86c0019d8"} Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.389668 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerStarted","Data":"464be03f9bdcb1a59cd89ceed6efcfee72f197ab8874091b225281237a7c657e"} Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.389687 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerStarted","Data":"d2561a270e530b3278784f5fa9b1e242d55ca64c7cb9080b25d994a5c3175adf"} Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.407854 4780 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.407834462 podStartE2EDuration="2.407834462s" podCreationTimestamp="2025-12-05 07:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:13.40257722 +0000 UTC m=+1387.472093562" watchObservedRunningTime="2025-12-05 07:09:13.407834462 +0000 UTC m=+1387.477350804" Dec 05 07:09:13 crc kubenswrapper[4780]: I1205 07:09:13.427933 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.427909774 podStartE2EDuration="2.427909774s" podCreationTimestamp="2025-12-05 07:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:13.418212073 +0000 UTC m=+1387.487728435" watchObservedRunningTime="2025-12-05 07:09:13.427909774 +0000 UTC m=+1387.497426116" Dec 05 07:09:15 crc kubenswrapper[4780]: I1205 07:09:15.759141 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 07:09:16 crc kubenswrapper[4780]: I1205 07:09:16.818774 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 07:09:21 crc kubenswrapper[4780]: I1205 07:09:21.438137 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 07:09:21 crc kubenswrapper[4780]: I1205 07:09:21.818853 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 07:09:21 crc kubenswrapper[4780]: I1205 07:09:21.872284 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 07:09:22 crc kubenswrapper[4780]: I1205 07:09:22.062559 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:09:22 crc kubenswrapper[4780]: I1205 07:09:22.062642 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:09:22 crc kubenswrapper[4780]: I1205 07:09:22.509755 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 07:09:23 crc kubenswrapper[4780]: I1205 07:09:23.145172 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:23 crc kubenswrapper[4780]: I1205 07:09:23.145190 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:25 crc kubenswrapper[4780]: I1205 07:09:25.194362 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:09:25 crc kubenswrapper[4780]: I1205 07:09:25.194941 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" containerName="kube-state-metrics" containerID="cri-o://5b993d0922d34d413538759bce43f543fde767319d1977a38555ec9962eb3d8c" 
gracePeriod=30
Dec 05 07:09:25 crc kubenswrapper[4780]: I1205 07:09:25.509310 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" containerID="5b993d0922d34d413538759bce43f543fde767319d1977a38555ec9962eb3d8c" exitCode=2
Dec 05 07:09:25 crc kubenswrapper[4780]: I1205 07:09:25.509414 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1","Type":"ContainerDied","Data":"5b993d0922d34d413538759bce43f543fde767319d1977a38555ec9962eb3d8c"}
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.064537 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.189121 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsq58\" (UniqueName: \"kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58\") pod \"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1\" (UID: \"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1\") "
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.194832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58" (OuterVolumeSpecName: "kube-api-access-vsq58") pod "e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" (UID: "e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1"). InnerVolumeSpecName "kube-api-access-vsq58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.291241 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsq58\" (UniqueName: \"kubernetes.io/projected/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1-kube-api-access-vsq58\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.518899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1","Type":"ContainerDied","Data":"5162f3835cf1ddd3c34f5f74fda2f5f6aa68265a6b2e735fc6b073a09e822ff6"}
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.518982 4780 scope.go:117] "RemoveContainer" containerID="5b993d0922d34d413538759bce43f543fde767319d1977a38555ec9962eb3d8c"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.518986 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.565910 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.582566 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.596301 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 07:09:26 crc kubenswrapper[4780]: E1205 07:09:26.600143 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" containerName="kube-state-metrics"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.600169 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" containerName="kube-state-metrics"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.600365 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" containerName="kube-state-metrics"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.601029 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.603302 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.603530 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.627343 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.696877 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.696987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.697048 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxd7\" (UniqueName: \"kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.697106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: E1205 07:09:26.724616 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bc5ee0_415c_454f_b8d3_85efd5fe6ab1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6bc5ee0_415c_454f_b8d3_85efd5fe6ab1.slice/crio-5162f3835cf1ddd3c34f5f74fda2f5f6aa68265a6b2e735fc6b073a09e822ff6\": RecentStats: unable to find data in memory cache]"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.798663 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.798769 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxd7\" (UniqueName: \"kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.798832 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.798955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.805238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.805416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.805468 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.820997 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxd7\" (UniqueName: \"kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7\") pod \"kube-state-metrics-0\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " pod="openstack/kube-state-metrics-0"
Dec 05 07:09:26 crc kubenswrapper[4780]: I1205 07:09:26.922756 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.050871 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.051507 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-central-agent" containerID="cri-o://df5d8f584255ce7a41ab09763b7d0e65cff7f8332d83b675de804e93cddea0c7" gracePeriod=30
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.051766 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-notification-agent" containerID="cri-o://9aa68986804bdba27dfae6148f0e504ceca1ead026dae332ef3b845cada8ded2" gracePeriod=30
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.051835 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="sg-core" containerID="cri-o://e319c16404d228e3b9f27ff16585859030e0d4eeae70292340fcfe04695d2e07" gracePeriod=30
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.051820 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="proxy-httpd" containerID="cri-o://a4a74b22f43ec1f7bb567a26826d7c7ec86726652e7641ef686b6f50fbe33c68" gracePeriod=30
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.400139 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 07:09:27 crc kubenswrapper[4780]: W1205 07:09:27.402279 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe98bcd_7b01_4246_9879_15ed51cf7a1f.slice/crio-9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996 WatchSource:0}: Error finding container 9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996: Status 404 returned error can't find the container with id 9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.530409 4780 generic.go:334] "Generic (PLEG): container finished" podID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerID="a4a74b22f43ec1f7bb567a26826d7c7ec86726652e7641ef686b6f50fbe33c68" exitCode=0
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.530731 4780 generic.go:334] "Generic (PLEG): container finished" podID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerID="e319c16404d228e3b9f27ff16585859030e0d4eeae70292340fcfe04695d2e07" exitCode=2
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.530556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerDied","Data":"a4a74b22f43ec1f7bb567a26826d7c7ec86726652e7641ef686b6f50fbe33c68"}
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.530794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerDied","Data":"e319c16404d228e3b9f27ff16585859030e0d4eeae70292340fcfe04695d2e07"}
Dec 05 07:09:27 crc kubenswrapper[4780]: I1205 07:09:27.532338 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfe98bcd-7b01-4246-9879-15ed51cf7a1f","Type":"ContainerStarted","Data":"9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996"}
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.149818 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1" path="/var/lib/kubelet/pods/e6bc5ee0-415c-454f-b8d3-85efd5fe6ab1/volumes"
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.546271 4780 generic.go:334] "Generic (PLEG): container finished" podID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerID="df5d8f584255ce7a41ab09763b7d0e65cff7f8332d83b675de804e93cddea0c7" exitCode=0
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.546320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerDied","Data":"df5d8f584255ce7a41ab09763b7d0e65cff7f8332d83b675de804e93cddea0c7"}
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.548179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfe98bcd-7b01-4246-9879-15ed51cf7a1f","Type":"ContainerStarted","Data":"a959f6bc66c2db1c1600ed04dc5d26591b5e87880b38d5a29268b847e514d376"}
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.548386 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 05 07:09:28 crc kubenswrapper[4780]: I1205 07:09:28.572173 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.20516385 podStartE2EDuration="2.572150878s" podCreationTimestamp="2025-12-05 07:09:26 +0000 UTC" firstStartedPulling="2025-12-05 07:09:27.40543001 +0000 UTC m=+1401.474946342" lastFinishedPulling="2025-12-05 07:09:27.772417038 +0000 UTC m=+1401.841933370" observedRunningTime="2025-12-05 07:09:28.561362087 +0000 UTC m=+1402.630878429" watchObservedRunningTime="2025-12-05 07:09:28.572150878 +0000 UTC m=+1402.641667210"
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.558872 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b842be9-82f0-42b1-aee2-97452b8cda61" containerID="69d9fb1b4be7a54df2d658eec2e895ce95479d6848210731e3f484c81badd3c6" exitCode=137
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.559032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b842be9-82f0-42b1-aee2-97452b8cda61","Type":"ContainerDied","Data":"69d9fb1b4be7a54df2d658eec2e895ce95479d6848210731e3f484c81badd3c6"}
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.575469 4780 generic.go:334] "Generic (PLEG): container finished" podID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerID="57f0d56f57dba5a76b5ae9c524ad14669a4cf2640f360dae3d76c67b3ea0971c" exitCode=137
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.575698 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerDied","Data":"57f0d56f57dba5a76b5ae9c524ad14669a4cf2640f360dae3d76c67b3ea0971c"}
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.673132 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.680444 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752084 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data\") pod \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlhl2\" (UniqueName: \"kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2\") pod \"6b842be9-82f0-42b1-aee2-97452b8cda61\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752300 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data\") pod \"6b842be9-82f0-42b1-aee2-97452b8cda61\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752340 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5bp\" (UniqueName: \"kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp\") pod \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752362 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs\") pod \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752387 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle\") pod \"6b842be9-82f0-42b1-aee2-97452b8cda61\" (UID: \"6b842be9-82f0-42b1-aee2-97452b8cda61\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.752414 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle\") pod \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\" (UID: \"2815b480-60d3-4d1d-af40-b7bfb6d6b57f\") "
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.754199 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs" (OuterVolumeSpecName: "logs") pod "2815b480-60d3-4d1d-af40-b7bfb6d6b57f" (UID: "2815b480-60d3-4d1d-af40-b7bfb6d6b57f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.761787 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp" (OuterVolumeSpecName: "kube-api-access-zz5bp") pod "2815b480-60d3-4d1d-af40-b7bfb6d6b57f" (UID: "2815b480-60d3-4d1d-af40-b7bfb6d6b57f"). InnerVolumeSpecName "kube-api-access-zz5bp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.771214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2" (OuterVolumeSpecName: "kube-api-access-vlhl2") pod "6b842be9-82f0-42b1-aee2-97452b8cda61" (UID: "6b842be9-82f0-42b1-aee2-97452b8cda61"). InnerVolumeSpecName "kube-api-access-vlhl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.787069 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data" (OuterVolumeSpecName: "config-data") pod "6b842be9-82f0-42b1-aee2-97452b8cda61" (UID: "6b842be9-82f0-42b1-aee2-97452b8cda61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.787646 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data" (OuterVolumeSpecName: "config-data") pod "2815b480-60d3-4d1d-af40-b7bfb6d6b57f" (UID: "2815b480-60d3-4d1d-af40-b7bfb6d6b57f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.800147 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b842be9-82f0-42b1-aee2-97452b8cda61" (UID: "6b842be9-82f0-42b1-aee2-97452b8cda61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.806485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2815b480-60d3-4d1d-af40-b7bfb6d6b57f" (UID: "2815b480-60d3-4d1d-af40-b7bfb6d6b57f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855023 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855154 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5bp\" (UniqueName: \"kubernetes.io/projected/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-kube-api-access-zz5bp\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855225 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-logs\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855289 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b842be9-82f0-42b1-aee2-97452b8cda61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855395 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855461 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2815b480-60d3-4d1d-af40-b7bfb6d6b57f-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.855526 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlhl2\" (UniqueName: \"kubernetes.io/projected/6b842be9-82f0-42b1-aee2-97452b8cda61-kube-api-access-vlhl2\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.908533 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 07:09:29 crc kubenswrapper[4780]: I1205 07:09:29.908601 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.585874 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b842be9-82f0-42b1-aee2-97452b8cda61","Type":"ContainerDied","Data":"d1adfc93afcde0b563f60c5300defa5bb1450a1aa9ef48e6efeb74b20a19a75c"}
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.586010 4780 scope.go:117] "RemoveContainer" containerID="69d9fb1b4be7a54df2d658eec2e895ce95479d6848210731e3f484c81badd3c6"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.585923 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.589407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2815b480-60d3-4d1d-af40-b7bfb6d6b57f","Type":"ContainerDied","Data":"71aca912ba4e947d67692d30fb79844a4560fb9ce59cc5d3e257a1ae2b522f16"}
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.589540 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.615122 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.615939 4780 scope.go:117] "RemoveContainer" containerID="57f0d56f57dba5a76b5ae9c524ad14669a4cf2640f360dae3d76c67b3ea0971c"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.630854 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.650536 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.654943 4780 scope.go:117] "RemoveContainer" containerID="5b1b905a6f84e56b61f4ec8b3c8c3a526b1d095c6d113428524ee109306574f3"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.662391 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.675500 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: E1205 07:09:30.676009 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b842be9-82f0-42b1-aee2-97452b8cda61" containerName="nova-cell1-novncproxy-novncproxy"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676029 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b842be9-82f0-42b1-aee2-97452b8cda61" containerName="nova-cell1-novncproxy-novncproxy"
Dec 05 07:09:30 crc kubenswrapper[4780]: E1205 07:09:30.676039 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-log"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676045 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-log"
Dec 05 07:09:30 crc kubenswrapper[4780]: E1205 07:09:30.676058 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-metadata"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676065 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-metadata"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676277 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-metadata"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676300 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" containerName="nova-metadata-log"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.676319 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b842be9-82f0-42b1-aee2-97452b8cda61" containerName="nova-cell1-novncproxy-novncproxy"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.677278 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.680239 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.680689 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.694476 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.696590 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.703409 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.703513 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.703757 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.715625 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.730168 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776530 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776690 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776772 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776844 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776928 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.776985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zgh\" (UniqueName: \"kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.777036 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.777084 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.777115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fskd\" (UniqueName: \"kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zgh\" (UniqueName: \"kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879614 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879663 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fskd\" (UniqueName: \"kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879740 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879770 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879805 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.879928 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.880502 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.885722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.886441 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.886923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.887805 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.888065 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.896221 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.899015 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.899301 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zgh\" (UniqueName: \"kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh\") pod \"nova-cell1-novncproxy-0\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:30 crc kubenswrapper[4780]: I1205 07:09:30.909226 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fskd\" (UniqueName: \"kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd\") pod \"nova-metadata-0\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " pod="openstack/nova-metadata-0"
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.004049 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.024852 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.500233 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 07:09:31 crc kubenswrapper[4780]: W1205 07:09:31.511442 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e81085b_5e05_4f2a_8753_dff4325bb9ee.slice/crio-f9595aad309cc4f5de1d16f8c8339a1cde6de1b9096337ea7d09905778a7cbf7 WatchSource:0}: Error finding container f9595aad309cc4f5de1d16f8c8339a1cde6de1b9096337ea7d09905778a7cbf7: Status 404 returned error can't find the container with id f9595aad309cc4f5de1d16f8c8339a1cde6de1b9096337ea7d09905778a7cbf7
Dec 05 07:09:31 crc kubenswrapper[4780]: W1205 07:09:31.511757 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828f916b_54ac_4498_b1a7_139334944d9b.slice/crio-e94759d132052653acf4e7af754c43f39443db4761339eb8878325c032b9c1d6 WatchSource:0}: Error finding container e94759d132052653acf4e7af754c43f39443db4761339eb8878325c032b9c1d6: Status 404 returned error can't find the container with id e94759d132052653acf4e7af754c43f39443db4761339eb8878325c032b9c1d6
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.512538 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.602951 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"828f916b-54ac-4498-b1a7-139334944d9b","Type":"ContainerStarted","Data":"e94759d132052653acf4e7af754c43f39443db4761339eb8878325c032b9c1d6"}
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.604176 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerStarted","Data":"f9595aad309cc4f5de1d16f8c8339a1cde6de1b9096337ea7d09905778a7cbf7"}
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.617753 4780 generic.go:334] "Generic (PLEG): container finished" podID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerID="9aa68986804bdba27dfae6148f0e504ceca1ead026dae332ef3b845cada8ded2" exitCode=0
Dec 05 07:09:31 crc kubenswrapper[4780]: I1205 07:09:31.617828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerDied","Data":"9aa68986804bdba27dfae6148f0e504ceca1ead026dae332ef3b845cada8ded2"}
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.068206 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.068612 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.069154 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.069219 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.072308 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.076328 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.149291 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2815b480-60d3-4d1d-af40-b7bfb6d6b57f" path="/var/lib/kubelet/pods/2815b480-60d3-4d1d-af40-b7bfb6d6b57f/volumes"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.149882 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b842be9-82f0-42b1-aee2-97452b8cda61" path="/var/lib/kubelet/pods/6b842be9-82f0-42b1-aee2-97452b8cda61/volumes"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.188437 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305260 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305469 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305511 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305540 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: E1205 07:09:32.305737 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-notification-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305757 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-notification-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305785 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: E1205 07:09:32.305782 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-central-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305825 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-central-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: E1205 07:09:32.305840 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="sg-core"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305847 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="sg-core"
Dec 05 07:09:32 crc kubenswrapper[4780]: E1205 07:09:32.305866 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="proxy-httpd"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305877 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="proxy-httpd"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.306165 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="proxy-httpd"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.306190 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-central-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.306198 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="sg-core"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.306210 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" containerName="ceilometer-notification-agent"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.307538 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.305814 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dfvw\" (UniqueName: \"kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.308424 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.308483 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts\") pod \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\" (UID: \"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d\") "
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.309023 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.316486 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.336732 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw" (OuterVolumeSpecName: "kube-api-access-5dfvw") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "kube-api-access-5dfvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.337128 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.337203 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts" (OuterVolumeSpecName: "scripts") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.384831 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420492 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420553 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbmk\" (UniqueName: \"kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420608 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420637 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420921 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420949 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dfvw\" (UniqueName: \"kubernetes.io/projected/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-kube-api-access-5dfvw\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.420964 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.440063 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.482466 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522605 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522718 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbmk\" (UniqueName: \"kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522770 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522843 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.522892 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.524105 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.524211 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.524256 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.524758 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.524800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.534087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data" (OuterVolumeSpecName: "config-data") pod "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" (UID: "89b81e7c-ad4f-44cc-86f0-f36eabb3c45d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.539174 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jbmk\" (UniqueName: \"kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk\") pod \"dnsmasq-dns-7f9fbbf6f7-sgn25\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.624277 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.631776 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89b81e7c-ad4f-44cc-86f0-f36eabb3c45d","Type":"ContainerDied","Data":"cd7cb29de7dc7de4c49b7dbae4d680fbd823be8c80bccc561e3a2863f0858567"}
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.631818 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.631835 4780 scope.go:117] "RemoveContainer" containerID="a4a74b22f43ec1f7bb567a26826d7c7ec86726652e7641ef686b6f50fbe33c68"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.634403 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"828f916b-54ac-4498-b1a7-139334944d9b","Type":"ContainerStarted","Data":"059a93cd192837f3113a8d0e3807d7b3f8ab87f622645af37531a78b78d5d7f8"}
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.641124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerStarted","Data":"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac"}
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.641171 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerStarted","Data":"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a"}
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.655496 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.655479748 podStartE2EDuration="2.655479748s" podCreationTimestamp="2025-12-05 07:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:32.654010518 +0000 UTC m=+1406.723526850" watchObservedRunningTime="2025-12-05 07:09:32.655479748 +0000 UTC m=+1406.724996080"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.667993 4780 scope.go:117] "RemoveContainer" containerID="e319c16404d228e3b9f27ff16585859030e0d4eeae70292340fcfe04695d2e07"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.689058 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.689033813 podStartE2EDuration="2.689033813s" podCreationTimestamp="2025-12-05 07:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:32.684647045 +0000 UTC m=+1406.754163397" watchObservedRunningTime="2025-12-05 07:09:32.689033813 +0000 UTC m=+1406.758550145"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.707467 4780 scope.go:117] "RemoveContainer" containerID="9aa68986804bdba27dfae6148f0e504ceca1ead026dae332ef3b845cada8ded2"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.723948 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.742844 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.753496 4780 scope.go:117] "RemoveContainer" containerID="df5d8f584255ce7a41ab09763b7d0e65cff7f8332d83b675de804e93cddea0c7"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.754370 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.757118 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.759756 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.759983 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.761675 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.764220 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.765522 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827744 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827901 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0"
Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827942 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.827987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.828017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hql4k\" (UniqueName: \"kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.828070 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.930631 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.931148 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hql4k\" (UniqueName: \"kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.932263 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.932362 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.946493 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.947919 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.935442 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.948143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.948626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.948735 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.948742 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.948962 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.949686 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.951138 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hql4k\" (UniqueName: \"kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.953191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:32 crc kubenswrapper[4780]: I1205 07:09:32.958528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " pod="openstack/ceilometer-0" Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.090166 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.257596 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"] Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.557982 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:09:33 crc kubenswrapper[4780]: W1205 07:09:33.562983 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ec83d54_b768_4738_a2e4_9c77747d64e7.slice/crio-7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd WatchSource:0}: Error finding container 7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd: Status 404 returned error can't find the container with id 7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.650178 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerID="7678f8c76dd9a388582a2e246f59d292a26940453c846b3140848ca635c8c94c" exitCode=0 Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.650393 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" event={"ID":"b7f1d4f8-b32f-4448-8db1-ff7299256169","Type":"ContainerDied","Data":"7678f8c76dd9a388582a2e246f59d292a26940453c846b3140848ca635c8c94c"} Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.650560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" event={"ID":"b7f1d4f8-b32f-4448-8db1-ff7299256169","Type":"ContainerStarted","Data":"41a94c6ad8e328e7b0b5a2bfb216a6dd21eadb38d82c4a28034f6ea978d7c63f"} Dec 05 07:09:33 crc kubenswrapper[4780]: I1205 07:09:33.653584 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerStarted","Data":"7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd"} Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.170220 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b81e7c-ad4f-44cc-86f0-f36eabb3c45d" path="/var/lib/kubelet/pods/89b81e7c-ad4f-44cc-86f0-f36eabb3c45d/volumes" Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.664030 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerStarted","Data":"b9f87c3eba2fae6410181f1cfccefbdd7a818f8f594f45c68f1af0f1deb78353"} Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.666542 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" event={"ID":"b7f1d4f8-b32f-4448-8db1-ff7299256169","Type":"ContainerStarted","Data":"c66ff948bbb0223dc5ef04d22b4b1a8ff3bff1768e3f1c9bbb557cbcf9b1c5fc"} Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.666655 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.690777 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" podStartSLOduration=2.690760441 podStartE2EDuration="2.690760441s" podCreationTimestamp="2025-12-05 07:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 07:09:34.684785699 +0000 UTC m=+1408.754302031" watchObservedRunningTime="2025-12-05 07:09:34.690760441 +0000 UTC m=+1408.760276773" Dec 05 07:09:34 crc kubenswrapper[4780]: I1205 07:09:34.703090 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.256830 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.257409 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-log" containerID="cri-o://464be03f9bdcb1a59cd89ceed6efcfee72f197ab8874091b225281237a7c657e" gracePeriod=30 Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.257457 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-api" containerID="cri-o://fd3575cc6c6b206010006cb993a056ca1dc50179ebe384773c309cb86c0019d8" gracePeriod=30 Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.677354 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerStarted","Data":"a20d6cbd57ed2168f88f8759eb95d421a1f35187316266826d960b469948f1fa"} Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.679866 4780 generic.go:334] "Generic (PLEG): container finished" podID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerID="464be03f9bdcb1a59cd89ceed6efcfee72f197ab8874091b225281237a7c657e" exitCode=143 Dec 05 07:09:35 crc kubenswrapper[4780]: I1205 07:09:35.679926 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerDied","Data":"464be03f9bdcb1a59cd89ceed6efcfee72f197ab8874091b225281237a7c657e"} Dec 05 07:09:36 crc kubenswrapper[4780]: I1205 07:09:36.004237 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:36 crc kubenswrapper[4780]: I1205 07:09:36.004292 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:36 crc kubenswrapper[4780]: I1205 07:09:36.025670 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:09:36 crc kubenswrapper[4780]: I1205 07:09:36.931821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 07:09:37 crc kubenswrapper[4780]: I1205 07:09:37.701155 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerStarted","Data":"bf8403cc2bb192e0c688d9786ea4076ec9cea534cc6c97016719aeffa054e392"} Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.715070 4780 generic.go:334] "Generic (PLEG): container finished" podID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerID="fd3575cc6c6b206010006cb993a056ca1dc50179ebe384773c309cb86c0019d8" exitCode=0 Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.715585 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerDied","Data":"fd3575cc6c6b206010006cb993a056ca1dc50179ebe384773c309cb86c0019d8"} Dec 05 07:09:38 crc 
kubenswrapper[4780]: I1205 07:09:38.720534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerStarted","Data":"16c73fa0f2a5f44cf47ed2c1b9a24fbd61232e5c1ea4917a71340ce36c55db3d"} Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.720709 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-central-agent" containerID="cri-o://b9f87c3eba2fae6410181f1cfccefbdd7a818f8f594f45c68f1af0f1deb78353" gracePeriod=30 Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.720802 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.721212 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="proxy-httpd" containerID="cri-o://16c73fa0f2a5f44cf47ed2c1b9a24fbd61232e5c1ea4917a71340ce36c55db3d" gracePeriod=30 Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.721261 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="sg-core" containerID="cri-o://bf8403cc2bb192e0c688d9786ea4076ec9cea534cc6c97016719aeffa054e392" gracePeriod=30 Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.721295 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-notification-agent" containerID="cri-o://a20d6cbd57ed2168f88f8759eb95d421a1f35187316266826d960b469948f1fa" gracePeriod=30 Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.753613 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.096980315 podStartE2EDuration="6.753591507s" podCreationTimestamp="2025-12-05 07:09:32 +0000 UTC" firstStartedPulling="2025-12-05 07:09:33.5662084 +0000 UTC m=+1407.635724732" lastFinishedPulling="2025-12-05 07:09:38.222819582 +0000 UTC m=+1412.292335924" observedRunningTime="2025-12-05 07:09:38.741649496 +0000 UTC m=+1412.811165828" watchObservedRunningTime="2025-12-05 07:09:38.753591507 +0000 UTC m=+1412.823107839" Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.817254 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.998575 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle\") pod \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.998906 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data\") pod \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.998997 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs\") pod \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " Dec 05 07:09:38 crc kubenswrapper[4780]: I1205 07:09:38.999143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hn9j\" (UniqueName: \"kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j\") pod \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\" (UID: \"8b66a8ee-b2bb-41d3-8a32-498b297ed509\") " Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.003572 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs" (OuterVolumeSpecName: "logs") pod "8b66a8ee-b2bb-41d3-8a32-498b297ed509" (UID: "8b66a8ee-b2bb-41d3-8a32-498b297ed509"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.003818 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j" (OuterVolumeSpecName: "kube-api-access-7hn9j") pod "8b66a8ee-b2bb-41d3-8a32-498b297ed509" (UID: "8b66a8ee-b2bb-41d3-8a32-498b297ed509"). InnerVolumeSpecName "kube-api-access-7hn9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.035720 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b66a8ee-b2bb-41d3-8a32-498b297ed509" (UID: "8b66a8ee-b2bb-41d3-8a32-498b297ed509"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.039140 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data" (OuterVolumeSpecName: "config-data") pod "8b66a8ee-b2bb-41d3-8a32-498b297ed509" (UID: "8b66a8ee-b2bb-41d3-8a32-498b297ed509"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.101552 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.101589 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b66a8ee-b2bb-41d3-8a32-498b297ed509-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.101602 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b66a8ee-b2bb-41d3-8a32-498b297ed509-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.101611 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hn9j\" (UniqueName: \"kubernetes.io/projected/8b66a8ee-b2bb-41d3-8a32-498b297ed509-kube-api-access-7hn9j\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.731654 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b66a8ee-b2bb-41d3-8a32-498b297ed509","Type":"ContainerDied","Data":"d2561a270e530b3278784f5fa9b1e242d55ca64c7cb9080b25d994a5c3175adf"} Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.732029 4780 scope.go:117] "RemoveContainer" containerID="fd3575cc6c6b206010006cb993a056ca1dc50179ebe384773c309cb86c0019d8" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.731721 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.735579 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerID="bf8403cc2bb192e0c688d9786ea4076ec9cea534cc6c97016719aeffa054e392" exitCode=2 Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.735612 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerID="a20d6cbd57ed2168f88f8759eb95d421a1f35187316266826d960b469948f1fa" exitCode=0 Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.735631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerDied","Data":"bf8403cc2bb192e0c688d9786ea4076ec9cea534cc6c97016719aeffa054e392"} Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.735763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerDied","Data":"a20d6cbd57ed2168f88f8759eb95d421a1f35187316266826d960b469948f1fa"} Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.764319 4780 scope.go:117] "RemoveContainer" containerID="464be03f9bdcb1a59cd89ceed6efcfee72f197ab8874091b225281237a7c657e" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.772040 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.788080 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.807777 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:39 crc kubenswrapper[4780]: E1205 07:09:39.808219 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-log" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.808239 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-log" Dec 05 07:09:39 crc kubenswrapper[4780]: E1205 07:09:39.808256 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-api" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.808262 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-api" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.808460 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-api" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.808484 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" containerName="nova-api-log" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.809573 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.812512 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.812936 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.824858 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.843789 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.916924 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.916976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.917012 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.917034 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.917057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:39 crc kubenswrapper[4780]: I1205 07:09:39.917135 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wvlk\" (UniqueName: \"kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.018974 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.019041 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.019173 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wvlk\" (UniqueName: \"kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.019236 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.019281 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.019328 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.020326 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.025333 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.025345 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.027519 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.042256 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wvlk\" (UniqueName: \"kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.044354 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.152518 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b66a8ee-b2bb-41d3-8a32-498b297ed509" path="/var/lib/kubelet/pods/8b66a8ee-b2bb-41d3-8a32-498b297ed509/volumes" Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.182682 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:40 crc kubenswrapper[4780]: W1205 07:09:40.615901 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda16414ec_df4c_4d62_8005_973dbe521b83.slice/crio-c1b3de197478787b92dd4f159bbaeca6d4492a4d36a6b5daaa7595f2035bde36 WatchSource:0}: Error finding container c1b3de197478787b92dd4f159bbaeca6d4492a4d36a6b5daaa7595f2035bde36: Status 404 returned error can't find the container with id c1b3de197478787b92dd4f159bbaeca6d4492a4d36a6b5daaa7595f2035bde36 Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.619569 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.750621 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerID="b9f87c3eba2fae6410181f1cfccefbdd7a818f8f594f45c68f1af0f1deb78353" exitCode=0 Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.750724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerDied","Data":"b9f87c3eba2fae6410181f1cfccefbdd7a818f8f594f45c68f1af0f1deb78353"} Dec 05 07:09:40 crc kubenswrapper[4780]: I1205 07:09:40.752664 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerStarted","Data":"c1b3de197478787b92dd4f159bbaeca6d4492a4d36a6b5daaa7595f2035bde36"} Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.004661 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.005353 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 07:09:41 crc 
kubenswrapper[4780]: I1205 07:09:41.025049 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.044331 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.767907 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerStarted","Data":"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33"} Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.767976 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerStarted","Data":"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247"} Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.789114 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:09:41 crc kubenswrapper[4780]: I1205 07:09:41.797489 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.797434752 podStartE2EDuration="2.797434752s" podCreationTimestamp="2025-12-05 07:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:41.786271631 +0000 UTC m=+1415.855787963" watchObservedRunningTime="2025-12-05 07:09:41.797434752 +0000 UTC m=+1415.866951094" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.011566 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zhcvf"] Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.013282 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.015798 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.016009 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.020450 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.020491 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.024547 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zhcvf"] Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.160667 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4sm\" (UniqueName: \"kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.160762 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.160784 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.160915 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.262738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4sm\" (UniqueName: \"kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.262828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.262848 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.262988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.268615 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.268690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.276187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.281976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4sm\" (UniqueName: \"kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm\") pod \"nova-cell1-cell-mapping-zhcvf\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.334711 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.766034 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zhcvf"] Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.768188 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.854442 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:09:42 crc kubenswrapper[4780]: I1205 07:09:42.854664 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="dnsmasq-dns" containerID="cri-o://d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97" gracePeriod=10 Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.330182 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493057 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493123 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493208 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493327 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gk98\" (UniqueName: \"kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493406 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.493476 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0\") pod \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\" (UID: \"8fcbc6a3-d079-4d42-9761-572c3068dbb8\") " Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.503107 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98" (OuterVolumeSpecName: "kube-api-access-5gk98") pod 
"8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "kube-api-access-5gk98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.551747 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config" (OuterVolumeSpecName: "config") pod "8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.557493 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.560682 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.560849 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.574567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fcbc6a3-d079-4d42-9761-572c3068dbb8" (UID: "8fcbc6a3-d079-4d42-9761-572c3068dbb8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.595982 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.596019 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gk98\" (UniqueName: \"kubernetes.io/projected/8fcbc6a3-d079-4d42-9761-572c3068dbb8-kube-api-access-5gk98\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.596042 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.596058 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.596072 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.596086 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fcbc6a3-d079-4d42-9761-572c3068dbb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.792869 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zhcvf" event={"ID":"e02b8260-4e63-48c3-b879-3840b95b60d5","Type":"ContainerStarted","Data":"90c21b5b82ee7130df8d8bcf05d3b02beac2d30b78c61ce4d5952c9cdff4a483"} Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.792975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zhcvf" event={"ID":"e02b8260-4e63-48c3-b879-3840b95b60d5","Type":"ContainerStarted","Data":"fc44c2aa12f10a53f711160bf8bb5406834324e033ea70d5a47bcc138ce331e4"} Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.796222 4780 generic.go:334] "Generic (PLEG): container finished" podID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerID="d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97" exitCode=0 Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.796255 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" event={"ID":"8fcbc6a3-d079-4d42-9761-572c3068dbb8","Type":"ContainerDied","Data":"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97"} Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.796276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" event={"ID":"8fcbc6a3-d079-4d42-9761-572c3068dbb8","Type":"ContainerDied","Data":"a7b96a4ce65ce03eb654dc202a60896368c9f6f95d0cb45fd9cfeb0d496cd3a6"} Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.796293 4780 scope.go:117] "RemoveContainer" containerID="d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.796435 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd87576bf-w54fp" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.816151 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zhcvf" podStartSLOduration=2.816129397 podStartE2EDuration="2.816129397s" podCreationTimestamp="2025-12-05 07:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:43.805280114 +0000 UTC m=+1417.874796446" watchObservedRunningTime="2025-12-05 07:09:43.816129397 +0000 UTC m=+1417.885645729" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.832099 4780 scope.go:117] "RemoveContainer" containerID="508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.833755 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.840788 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd87576bf-w54fp"] Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.853675 4780 scope.go:117] "RemoveContainer" containerID="d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97" Dec 05 07:09:43 crc kubenswrapper[4780]: E1205 07:09:43.854463 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97\": container with ID starting with d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97 not found: ID does not exist" containerID="d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.854515 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97"} err="failed to get container status \"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97\": rpc error: code = NotFound desc = could not find container \"d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97\": container with ID starting with d1b1bf47ae3d354850834dc8f639d252f17a1926fff49d1ca5525a38a2ea8b97 not found: ID does not exist" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.854543 4780 scope.go:117] "RemoveContainer" containerID="508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5" Dec 05 07:09:43 crc kubenswrapper[4780]: E1205 07:09:43.855398 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5\": container with ID starting with 508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5 not found: ID does not exist" containerID="508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5" Dec 05 07:09:43 crc kubenswrapper[4780]: I1205 07:09:43.855447 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5"} err="failed to get container status \"508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5\": rpc error: code = NotFound desc = could not find container \"508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5\": container with ID starting with 
508ae9c023d6130d9f9921ee20bfc5e9e3ef230b28ced11dd1d9936fa3074fe5 not found: ID does not exist" Dec 05 07:09:44 crc kubenswrapper[4780]: I1205 07:09:44.149615 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" path="/var/lib/kubelet/pods/8fcbc6a3-d079-4d42-9761-572c3068dbb8/volumes" Dec 05 07:09:47 crc kubenswrapper[4780]: I1205 07:09:47.859412 4780 generic.go:334] "Generic (PLEG): container finished" podID="e02b8260-4e63-48c3-b879-3840b95b60d5" containerID="90c21b5b82ee7130df8d8bcf05d3b02beac2d30b78c61ce4d5952c9cdff4a483" exitCode=0 Dec 05 07:09:47 crc kubenswrapper[4780]: I1205 07:09:47.860078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zhcvf" event={"ID":"e02b8260-4e63-48c3-b879-3840b95b60d5","Type":"ContainerDied","Data":"90c21b5b82ee7130df8d8bcf05d3b02beac2d30b78c61ce4d5952c9cdff4a483"} Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.254919 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.404992 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data\") pod \"e02b8260-4e63-48c3-b879-3840b95b60d5\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.405048 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4sm\" (UniqueName: \"kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm\") pod \"e02b8260-4e63-48c3-b879-3840b95b60d5\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.405276 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts\") pod \"e02b8260-4e63-48c3-b879-3840b95b60d5\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.405333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle\") pod \"e02b8260-4e63-48c3-b879-3840b95b60d5\" (UID: \"e02b8260-4e63-48c3-b879-3840b95b60d5\") " Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.411016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts" (OuterVolumeSpecName: "scripts") pod "e02b8260-4e63-48c3-b879-3840b95b60d5" (UID: "e02b8260-4e63-48c3-b879-3840b95b60d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.411581 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm" (OuterVolumeSpecName: "kube-api-access-2f4sm") pod "e02b8260-4e63-48c3-b879-3840b95b60d5" (UID: "e02b8260-4e63-48c3-b879-3840b95b60d5"). InnerVolumeSpecName "kube-api-access-2f4sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.432247 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e02b8260-4e63-48c3-b879-3840b95b60d5" (UID: "e02b8260-4e63-48c3-b879-3840b95b60d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.433141 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data" (OuterVolumeSpecName: "config-data") pod "e02b8260-4e63-48c3-b879-3840b95b60d5" (UID: "e02b8260-4e63-48c3-b879-3840b95b60d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.507205 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.507242 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4sm\" (UniqueName: \"kubernetes.io/projected/e02b8260-4e63-48c3-b879-3840b95b60d5-kube-api-access-2f4sm\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.507254 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.507263 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02b8260-4e63-48c3-b879-3840b95b60d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.880233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zhcvf" event={"ID":"e02b8260-4e63-48c3-b879-3840b95b60d5","Type":"ContainerDied","Data":"fc44c2aa12f10a53f711160bf8bb5406834324e033ea70d5a47bcc138ce331e4"} Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.880269 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc44c2aa12f10a53f711160bf8bb5406834324e033ea70d5a47bcc138ce331e4" Dec 05 07:09:49 crc kubenswrapper[4780]: I1205 07:09:49.880324 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zhcvf" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.063974 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.064287 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-log" containerID="cri-o://ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" gracePeriod=30 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.064352 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-api" containerID="cri-o://b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" gracePeriod=30 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.087094 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.087370 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0139888-4d6b-4749-894c-46a370518e12" containerName="nova-scheduler-scheduler" containerID="cri-o://8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" gracePeriod=30 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.111551 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.111906 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-log" containerID="cri-o://51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a" gracePeriod=30 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.112030 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-metadata" containerID="cri-o://e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac" gracePeriod=30 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.660672 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730222 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730304 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730384 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730482 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wvlk\" (UniqueName: \"kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730510 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs\") pod \"a16414ec-df4c-4d62-8005-973dbe521b83\" (UID: \"a16414ec-df4c-4d62-8005-973dbe521b83\") " Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.730797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs" (OuterVolumeSpecName: "logs") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.731046 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a16414ec-df4c-4d62-8005-973dbe521b83-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.739023 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk" (OuterVolumeSpecName: "kube-api-access-9wvlk") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "kube-api-access-9wvlk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.766090 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.771312 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data" (OuterVolumeSpecName: "config-data") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.803421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.803661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a16414ec-df4c-4d62-8005-973dbe521b83" (UID: "a16414ec-df4c-4d62-8005-973dbe521b83"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.833032 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.833065 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.833076 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wvlk\" (UniqueName: \"kubernetes.io/projected/a16414ec-df4c-4d62-8005-973dbe521b83-kube-api-access-9wvlk\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.833088 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.833096 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16414ec-df4c-4d62-8005-973dbe521b83-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.890129 4780 generic.go:334] "Generic (PLEG): container finished" podID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerID="51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a" exitCode=143 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.890190 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerDied","Data":"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a"} Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893120 4780 generic.go:334] "Generic (PLEG): container finished" podID="a16414ec-df4c-4d62-8005-973dbe521b83" containerID="b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" exitCode=0 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893148 4780 generic.go:334] "Generic (PLEG): container finished" podID="a16414ec-df4c-4d62-8005-973dbe521b83" containerID="ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" exitCode=143 Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerDied","Data":"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33"} Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893202 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893226 4780 scope.go:117] "RemoveContainer" containerID="b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerDied","Data":"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247"} Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.893330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a16414ec-df4c-4d62-8005-973dbe521b83","Type":"ContainerDied","Data":"c1b3de197478787b92dd4f159bbaeca6d4492a4d36a6b5daaa7595f2035bde36"} Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.917333 4780 scope.go:117] "RemoveContainer" containerID="ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.935236 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.950116 4780 scope.go:117] "RemoveContainer" containerID="b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.951391 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33\": container with ID starting with b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33 not found: ID does not exist" containerID="b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.951675 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33"} err="failed to get container status \"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33\": rpc error: code = NotFound desc = could not find container \"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33\": container with ID starting with b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33 not found: ID does not exist" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.951705 
4780 scope.go:117] "RemoveContainer" containerID="ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.955113 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247\": container with ID starting with ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247 not found: ID does not exist" containerID="ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.955151 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247"} err="failed to get container status \"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247\": rpc error: code = NotFound desc = could not find container \"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247\": container with ID starting with ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247 not found: ID does not exist" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.955179 4780 scope.go:117] "RemoveContainer" containerID="b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.956394 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.961335 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33"} err="failed to get container status \"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33\": rpc error: code = NotFound desc = could not find container \"b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33\": container with ID starting with b4adfda1961560f1a01c9ed27ad911d80f285dcc05492e4c4b6453685ef0cd33 not found: ID does not exist" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.961377 4780 scope.go:117] "RemoveContainer" containerID="ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.961650 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247"} err="failed to get container status \"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247\": rpc error: code = NotFound desc = could not find container \"ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247\": container with ID starting with ad06c9a0337082534baec21dd0f2df3cf8d467fd9841ffdfb9de381415807247 not found: ID does not exist" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.967602 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.968119 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02b8260-4e63-48c3-b879-3840b95b60d5" containerName="nova-manage" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968158 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02b8260-4e63-48c3-b879-3840b95b60d5" containerName="nova-manage" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.968178 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="dnsmasq-dns" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968187 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="dnsmasq-dns" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.968212 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-log" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968220 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-log" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.968229 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="init" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968234 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="init" Dec 05 07:09:50 crc kubenswrapper[4780]: E1205 07:09:50.968252 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-api" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968258 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-api" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968467 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-api" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968481 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" containerName="nova-api-log" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968495 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02b8260-4e63-48c3-b879-3840b95b60d5" containerName="nova-manage" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.968511 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fcbc6a3-d079-4d42-9761-572c3068dbb8" containerName="dnsmasq-dns" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.969551 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.971756 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.971783 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.971952 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 07:09:50 crc kubenswrapper[4780]: I1205 07:09:50.986284 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.036858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8cr\" (UniqueName: \"kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.036933 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.036976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.037028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.037394 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.037738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139009 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139068 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8cr\" (UniqueName: \"kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr\") pod \"nova-api-0\" (UID: 
\"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139125 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.139243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.140581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.154470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.155195 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.156070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.157511 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.158360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8cr\" (UniqueName: \"kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr\") pod \"nova-api-0\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " pod="openstack/nova-api-0" Dec 
05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.308378 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.767858 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:09:51 crc kubenswrapper[4780]: E1205 07:09:51.821070 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:51 crc kubenswrapper[4780]: E1205 07:09:51.822719 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:51 crc kubenswrapper[4780]: E1205 07:09:51.824721 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:09:51 crc kubenswrapper[4780]: E1205 07:09:51.824754 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b0139888-4d6b-4749-894c-46a370518e12" containerName="nova-scheduler-scheduler" Dec 05 07:09:51 crc kubenswrapper[4780]: I1205 07:09:51.903972 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerStarted","Data":"2bb25b38c07985e1cebe5e146e01280477a7eef46395ee15936ece96d7e47724"} Dec 05 07:09:52 crc kubenswrapper[4780]: I1205 07:09:52.149769 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16414ec-df4c-4d62-8005-973dbe521b83" path="/var/lib/kubelet/pods/a16414ec-df4c-4d62-8005-973dbe521b83/volumes" Dec 05 07:09:52 crc kubenswrapper[4780]: I1205 07:09:52.913991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerStarted","Data":"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2"} Dec 05 07:09:52 crc kubenswrapper[4780]: I1205 07:09:52.914033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerStarted","Data":"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124"} Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.749271 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.777613 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.777585734 podStartE2EDuration="3.777585734s" podCreationTimestamp="2025-12-05 07:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:52.933995719 +0000 UTC m=+1427.003512051" watchObservedRunningTime="2025-12-05 07:09:53.777585734 +0000 UTC m=+1427.847102096" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.793849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle\") pod \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.795396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs\") pod \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.795555 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fskd\" (UniqueName: \"kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd\") pod \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.795675 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data\") pod \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.795778 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs\") pod \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\" (UID: \"3e81085b-5e05-4f2a-8753-dff4325bb9ee\") " Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.797380 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs" (OuterVolumeSpecName: "logs") pod "3e81085b-5e05-4f2a-8753-dff4325bb9ee" (UID: "3e81085b-5e05-4f2a-8753-dff4325bb9ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.803535 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd" (OuterVolumeSpecName: "kube-api-access-4fskd") pod "3e81085b-5e05-4f2a-8753-dff4325bb9ee" (UID: "3e81085b-5e05-4f2a-8753-dff4325bb9ee"). InnerVolumeSpecName "kube-api-access-4fskd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.839190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e81085b-5e05-4f2a-8753-dff4325bb9ee" (UID: "3e81085b-5e05-4f2a-8753-dff4325bb9ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.848587 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data" (OuterVolumeSpecName: "config-data") pod "3e81085b-5e05-4f2a-8753-dff4325bb9ee" (UID: "3e81085b-5e05-4f2a-8753-dff4325bb9ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.869090 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3e81085b-5e05-4f2a-8753-dff4325bb9ee" (UID: "3e81085b-5e05-4f2a-8753-dff4325bb9ee"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.898397 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.898442 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.898454 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fskd\" (UniqueName: \"kubernetes.io/projected/3e81085b-5e05-4f2a-8753-dff4325bb9ee-kube-api-access-4fskd\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.898464 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e81085b-5e05-4f2a-8753-dff4325bb9ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.898474 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e81085b-5e05-4f2a-8753-dff4325bb9ee-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.928116 4780 generic.go:334] "Generic (PLEG): container finished" podID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerID="e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac" exitCode=0 Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.928160 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.928171 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerDied","Data":"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac"} Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.928219 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3e81085b-5e05-4f2a-8753-dff4325bb9ee","Type":"ContainerDied","Data":"f9595aad309cc4f5de1d16f8c8339a1cde6de1b9096337ea7d09905778a7cbf7"} Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.928239 4780 scope.go:117] "RemoveContainer" containerID="e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.960283 4780 scope.go:117] "RemoveContainer" containerID="51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.962062 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.972022 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.987055 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:53 crc kubenswrapper[4780]: E1205 07:09:53.987438 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-log" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.987457 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-log" Dec 05 07:09:53 crc kubenswrapper[4780]: E1205 07:09:53.987482 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-metadata" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.987489 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-metadata" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.987672 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-metadata" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.987693 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" containerName="nova-metadata-log" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.988650 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.990457 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 07:09:53 crc kubenswrapper[4780]: I1205 07:09:53.990481 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.012397 4780 scope.go:117] "RemoveContainer" containerID="e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac" Dec 05 07:09:54 crc kubenswrapper[4780]: E1205 07:09:54.013217 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac\": container with ID starting with e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac not found: ID does not exist" containerID="e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.013263 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac"} err="failed to get container status \"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac\": rpc error: code = NotFound desc = could not find container \"e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac\": container with ID starting with e96971f5ef51bd5cbd25d7171b64b37fce39369f41a85da089f8a9b20a3f65ac not found: ID does not exist" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.013288 4780 scope.go:117] "RemoveContainer" containerID="51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a" Dec 05 07:09:54 crc kubenswrapper[4780]: E1205 07:09:54.013689 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a\": container with ID starting with 51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a not found: ID does not exist" containerID="51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.013753 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a"} err="failed to get container status \"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a\": rpc error: code = NotFound desc = could not find container \"51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a\": container with ID starting with 51b7c24f0f6942d86a97b557d763e74f71f6cff27677603cdeb9e31a3aedcf3a not found: ID does not exist" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.019509 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.102382 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.102431 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.102632 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.102693 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.102773 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6tb\" (UniqueName: \"kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.149460 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e81085b-5e05-4f2a-8753-dff4325bb9ee" path="/var/lib/kubelet/pods/3e81085b-5e05-4f2a-8753-dff4325bb9ee/volumes" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.204850 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.204916 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.204982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.205019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.205084 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6tb\" (UniqueName: \"kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 
07:09:54.206173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.209279 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.210232 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.210856 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.227359 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s6tb\" (UniqueName: \"kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb\") pod \"nova-metadata-0\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.316657 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.762642 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:09:54 crc kubenswrapper[4780]: W1205 07:09:54.762692 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc381b4ec_8b36_4a3d_8e07_dbbc3a021f11.slice/crio-33f982db2a13e4efcd01de3cd54fc7ade7eeba53042da8a022ed630d235143fe WatchSource:0}: Error finding container 33f982db2a13e4efcd01de3cd54fc7ade7eeba53042da8a022ed630d235143fe: Status 404 returned error can't find the container with id 33f982db2a13e4efcd01de3cd54fc7ade7eeba53042da8a022ed630d235143fe Dec 05 07:09:54 crc kubenswrapper[4780]: I1205 07:09:54.939172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerStarted","Data":"33f982db2a13e4efcd01de3cd54fc7ade7eeba53042da8a022ed630d235143fe"} Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.796665 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.834110 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data\") pod \"b0139888-4d6b-4749-894c-46a370518e12\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.834481 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle\") pod \"b0139888-4d6b-4749-894c-46a370518e12\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.834547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbvfr\" (UniqueName: \"kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr\") pod \"b0139888-4d6b-4749-894c-46a370518e12\" (UID: \"b0139888-4d6b-4749-894c-46a370518e12\") " Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.842675 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr" (OuterVolumeSpecName: "kube-api-access-gbvfr") pod "b0139888-4d6b-4749-894c-46a370518e12" (UID: "b0139888-4d6b-4749-894c-46a370518e12"). InnerVolumeSpecName "kube-api-access-gbvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.862271 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data" (OuterVolumeSpecName: "config-data") pod "b0139888-4d6b-4749-894c-46a370518e12" (UID: "b0139888-4d6b-4749-894c-46a370518e12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.882437 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0139888-4d6b-4749-894c-46a370518e12" (UID: "b0139888-4d6b-4749-894c-46a370518e12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.937033 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.937435 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbvfr\" (UniqueName: \"kubernetes.io/projected/b0139888-4d6b-4749-894c-46a370518e12-kube-api-access-gbvfr\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.937448 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0139888-4d6b-4749-894c-46a370518e12-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.952780 4780 generic.go:334] "Generic (PLEG): container finished" podID="b0139888-4d6b-4749-894c-46a370518e12" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" exitCode=0 Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.952910 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0139888-4d6b-4749-894c-46a370518e12","Type":"ContainerDied","Data":"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210"} Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.952940 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0139888-4d6b-4749-894c-46a370518e12","Type":"ContainerDied","Data":"5312fc8ca11f26def96e6acb76cff7b398dbb57ee26a272780c46ca55db82914"} Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.952981 4780 scope.go:117] "RemoveContainer" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.953145 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.959239 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerStarted","Data":"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11"} Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.959297 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerStarted","Data":"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21"} Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.994408 4780 scope.go:117] "RemoveContainer" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" Dec 05 07:09:55 crc kubenswrapper[4780]: E1205 07:09:55.994890 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210\": container with ID starting with 8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210 not found: ID does not exist" containerID="8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.994935 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210"} err="failed to get container status \"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210\": rpc error: code = NotFound desc = could not find container \"8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210\": container with ID starting with 8aeefd1cb394aec9eb343e6cef49d2da5a4fe616a1c9ce50dfb7a8fc52cc9210 not found: ID does not exist" Dec 05 07:09:55 crc kubenswrapper[4780]: I1205 07:09:55.996292 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.996280451 podStartE2EDuration="2.996280451s" podCreationTimestamp="2025-12-05 07:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:55.982426385 +0000 UTC m=+1430.051942707" watchObservedRunningTime="2025-12-05 07:09:55.996280451 +0000 UTC m=+1430.065796783" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.010147 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.023557 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.033217 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:56 crc kubenswrapper[4780]: E1205 07:09:56.034017 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0139888-4d6b-4749-894c-46a370518e12" containerName="nova-scheduler-scheduler" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.034050 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0139888-4d6b-4749-894c-46a370518e12" containerName="nova-scheduler-scheduler" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.034316 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0139888-4d6b-4749-894c-46a370518e12" containerName="nova-scheduler-scheduler" Dec 05 07:09:56 
crc kubenswrapper[4780]: I1205 07:09:56.035552 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.038991 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.064290 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.141448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.141494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.141524 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8d9\" (UniqueName: \"kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.149468 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0139888-4d6b-4749-894c-46a370518e12" path="/var/lib/kubelet/pods/b0139888-4d6b-4749-894c-46a370518e12/volumes" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.243856 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.243940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.243977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk8d9\" (UniqueName: \"kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.251024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.253233 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.270260 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk8d9\" (UniqueName: \"kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9\") pod \"nova-scheduler-0\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.352819 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:09:56 crc kubenswrapper[4780]: W1205 07:09:56.861755 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33af7252_1228_4051_bab0_cfcaee04fe1d.slice/crio-a01fbb63151c66d8c336572ebcdd5435b5db45c085006249ad45fb44dc0f5052 WatchSource:0}: Error finding container a01fbb63151c66d8c336572ebcdd5435b5db45c085006249ad45fb44dc0f5052: Status 404 returned error can't find the container with id a01fbb63151c66d8c336572ebcdd5435b5db45c085006249ad45fb44dc0f5052 Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.861913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:09:56 crc kubenswrapper[4780]: I1205 07:09:56.975296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33af7252-1228-4051-bab0-cfcaee04fe1d","Type":"ContainerStarted","Data":"a01fbb63151c66d8c336572ebcdd5435b5db45c085006249ad45fb44dc0f5052"} Dec 05 07:09:57 crc kubenswrapper[4780]: I1205 07:09:57.990769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33af7252-1228-4051-bab0-cfcaee04fe1d","Type":"ContainerStarted","Data":"e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20"} Dec 05 07:09:58 crc kubenswrapper[4780]: I1205 07:09:58.026719 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.02669456 podStartE2EDuration="2.02669456s" podCreationTimestamp="2025-12-05 07:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:09:58.01115478 +0000 UTC m=+1432.080671112" watchObservedRunningTime="2025-12-05 07:09:58.02669456 +0000 UTC m=+1432.096210912" Dec 05 07:09:59 crc kubenswrapper[4780]: I1205 07:09:59.317498 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:59 crc kubenswrapper[4780]: I1205 07:09:59.317614 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 07:09:59 crc kubenswrapper[4780]: I1205 07:09:59.908314 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:09:59 crc kubenswrapper[4780]: I1205 07:09:59.908977 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:10:01 crc kubenswrapper[4780]: I1205 07:10:01.309343 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:10:01 crc kubenswrapper[4780]: I1205 07:10:01.309403 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 07:10:01 crc kubenswrapper[4780]: I1205 07:10:01.354414 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 07:10:02 crc kubenswrapper[4780]: I1205 07:10:02.329100 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 07:10:02 crc kubenswrapper[4780]: I1205 07:10:02.329416 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 07:10:03 crc kubenswrapper[4780]: I1205 07:10:03.102831 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 05 07:10:04 crc kubenswrapper[4780]: I1205 07:10:04.317307 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 07:10:04 crc kubenswrapper[4780]: I1205 07:10:04.317379 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 07:10:05 crc kubenswrapper[4780]: I1205 07:10:05.329030 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 07:10:05 crc kubenswrapper[4780]: I1205 07:10:05.329095 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 07:10:06 crc kubenswrapper[4780]: I1205 07:10:06.354196 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 07:10:06 crc kubenswrapper[4780]: I1205 07:10:06.382189 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 07:10:07 crc kubenswrapper[4780]: I1205 07:10:07.095936 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.101787 4780 generic.go:334] "Generic (PLEG): container finished" podID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerID="16c73fa0f2a5f44cf47ed2c1b9a24fbd61232e5c1ea4917a71340ce36c55db3d" exitCode=137 Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 
07:10:09.101835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerDied","Data":"16c73fa0f2a5f44cf47ed2c1b9a24fbd61232e5c1ea4917a71340ce36c55db3d"} Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.102231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ec83d54-b768-4738-a2e4-9c77747d64e7","Type":"ContainerDied","Data":"7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd"} Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.102249 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac1da3faa8e25e3eb3eea32c1990f6eee81fceb00b175a88ee34e2640090fcd" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.137863 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.198894 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.198947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.198976 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.199056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hql4k\" (UniqueName: \"kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.199079 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.199150 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.199190 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.199240 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml\") pod \"9ec83d54-b768-4738-a2e4-9c77747d64e7\" (UID: \"9ec83d54-b768-4738-a2e4-9c77747d64e7\") " Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.200655 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.201458 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.204854 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts" (OuterVolumeSpecName: "scripts") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.205249 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k" (OuterVolumeSpecName: "kube-api-access-hql4k") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "kube-api-access-hql4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.233051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.254377 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.276676 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.289421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data" (OuterVolumeSpecName: "config-data") pod "9ec83d54-b768-4738-a2e4-9c77747d64e7" (UID: "9ec83d54-b768-4738-a2e4-9c77747d64e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.300958 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.300984 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.300994 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.301003 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hql4k\" (UniqueName: \"kubernetes.io/projected/9ec83d54-b768-4738-a2e4-9c77747d64e7-kube-api-access-hql4k\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.301011 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.301019 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.301027 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ec83d54-b768-4738-a2e4-9c77747d64e7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:09 crc kubenswrapper[4780]: I1205 07:10:09.301036 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ec83d54-b768-4738-a2e4-9c77747d64e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.113290 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.164072 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.180115 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.193448 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:10:10 crc kubenswrapper[4780]: E1205 07:10:10.194031 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-notification-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194061 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-notification-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: E1205 07:10:10.194081 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="proxy-httpd" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194089 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="proxy-httpd" Dec 05 07:10:10 crc kubenswrapper[4780]: E1205 07:10:10.194131 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-central-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194140 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-central-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: E1205 07:10:10.194156 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="sg-core" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194163 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="sg-core" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194435 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="sg-core" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194458 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-central-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194471 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="ceilometer-notification-agent" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.194480 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" containerName="proxy-httpd" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.196785 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.200135 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.200465 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.200939 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.209289 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.220573 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.220822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.220908 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.220974 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.221010 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.221070 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.221464 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.221851 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcsm\" (UniqueName: 
\"kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.323619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.324057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.324188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcsm\" (UniqueName: \"kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.324225 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.324327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.328759 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.328853 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.328927 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.325862 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.338740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.339039 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.339379 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.340476 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.341394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcsm\" (UniqueName: \"kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.342699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.343642 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts\") pod \"ceilometer-0\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.522048 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:10:10 crc kubenswrapper[4780]: I1205 07:10:10.962550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:10:10 crc kubenswrapper[4780]: W1205 07:10:10.964145 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aca675e_bb76_4588_b998_c26393dd5ab6.slice/crio-eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae WatchSource:0}: Error finding container eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae: Status 404 returned error can't find the container with id eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae Dec 05 07:10:11 crc kubenswrapper[4780]: I1205 07:10:11.123014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerStarted","Data":"eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae"} Dec 05 07:10:11 crc kubenswrapper[4780]: I1205 07:10:11.317639 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 07:10:11 crc kubenswrapper[4780]: I1205 07:10:11.318084 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 07:10:11 crc kubenswrapper[4780]: I1205 07:10:11.319521 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 07:10:11 crc kubenswrapper[4780]: I1205 07:10:11.327145 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 07:10:12 crc kubenswrapper[4780]: I1205 07:10:12.133565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerStarted","Data":"282e85d0f2ea9e2278d2658c562e6fa9b7d5cb1b13122f0ecb2e5ba8c5f54666"} Dec 05 07:10:12 crc kubenswrapper[4780]: I1205 07:10:12.133995 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 07:10:12 crc kubenswrapper[4780]: I1205 07:10:12.149202 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec83d54-b768-4738-a2e4-9c77747d64e7" path="/var/lib/kubelet/pods/9ec83d54-b768-4738-a2e4-9c77747d64e7/volumes" Dec 05 07:10:12 crc kubenswrapper[4780]: I1205 07:10:12.150065 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 07:10:13 crc kubenswrapper[4780]: I1205 07:10:13.148139 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerStarted","Data":"7b6ccff69e702c06122f20efccc590a8f63a94c28204c42d45a3128606dcedcb"} Dec 05 07:10:13 crc kubenswrapper[4780]: I1205 07:10:13.148759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerStarted","Data":"6729e14dde9f78be13ae40bdda9e3ae569261b8bcd8c18d065c49f17af80f082"} Dec 05 07:10:14 crc kubenswrapper[4780]: I1205 07:10:14.322285 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 07:10:14 crc kubenswrapper[4780]: I1205 07:10:14.323767 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 07:10:14 crc kubenswrapper[4780]: I1205 
07:10:14.329912 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 07:10:15 crc kubenswrapper[4780]: I1205 07:10:15.177558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerStarted","Data":"3793732da62f19e7a1e8b9f03d99576883cb03ca232244237185b399ee3c2f70"} Dec 05 07:10:15 crc kubenswrapper[4780]: I1205 07:10:15.178145 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 07:10:15 crc kubenswrapper[4780]: I1205 07:10:15.184236 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 07:10:15 crc kubenswrapper[4780]: I1205 07:10:15.220758 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.031397425 podStartE2EDuration="5.220738675s" podCreationTimestamp="2025-12-05 07:10:10 +0000 UTC" firstStartedPulling="2025-12-05 07:10:10.966313505 +0000 UTC m=+1445.035829847" lastFinishedPulling="2025-12-05 07:10:14.155654765 +0000 UTC m=+1448.225171097" observedRunningTime="2025-12-05 07:10:15.201307529 +0000 UTC m=+1449.270823871" watchObservedRunningTime="2025-12-05 07:10:15.220738675 +0000 UTC m=+1449.290255007" Dec 05 07:10:29 crc kubenswrapper[4780]: I1205 07:10:29.908086 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:10:29 crc kubenswrapper[4780]: I1205 07:10:29.908733 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:10:29 crc kubenswrapper[4780]: I1205 07:10:29.908795 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:10:29 crc kubenswrapper[4780]: I1205 07:10:29.909672 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:10:29 crc kubenswrapper[4780]: I1205 07:10:29.909733 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757" gracePeriod=600 Dec 05 07:10:30 crc kubenswrapper[4780]: I1205 07:10:30.322191 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757" exitCode=0 Dec 05 07:10:30 crc kubenswrapper[4780]: I1205 07:10:30.322236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757"} Dec 05 07:10:30 crc kubenswrapper[4780]: I1205 07:10:30.322581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a"} Dec 05 07:10:30 crc kubenswrapper[4780]: I1205 07:10:30.322609 4780 scope.go:117] "RemoveContainer" containerID="37d5186fb5eae115758d67047412cfc1def8a21875c148dddc28958b2a44062b" Dec 05 07:10:40 crc kubenswrapper[4780]: I1205 07:10:40.530037 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.920873 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.923830 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.931805 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.974854 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.975012 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:45 crc kubenswrapper[4780]: I1205 07:10:45.975143 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rf7q\" (UniqueName: \"kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.076508 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.076585 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.076635 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7rf7q\" (UniqueName: \"kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.077080 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.077177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.106031 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rf7q\" (UniqueName: \"kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q\") pod \"redhat-marketplace-6dt2m\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.252627 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:46 crc kubenswrapper[4780]: I1205 07:10:46.768813 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:47 crc kubenswrapper[4780]: I1205 07:10:47.478424 4780 generic.go:334] "Generic (PLEG): container finished" podID="5dca305d-8790-404f-bd29-7102f11b8eff" containerID="f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f" exitCode=0 Dec 05 07:10:47 crc kubenswrapper[4780]: I1205 07:10:47.478703 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerDied","Data":"f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f"} Dec 05 07:10:47 crc kubenswrapper[4780]: I1205 07:10:47.478724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerStarted","Data":"0d85b82c0c07b4705745bb57b12f6ac317fa7de3465b4282b61adc9bf72ae92b"} Dec 05 07:10:49 crc kubenswrapper[4780]: I1205 07:10:49.504936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerStarted","Data":"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0"} Dec 05 07:10:50 crc kubenswrapper[4780]: E1205 07:10:50.152671 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dca305d_8790_404f_bd29_7102f11b8eff.slice/crio-conmon-0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0.scope\": RecentStats: unable to find data in memory cache]" Dec 05 07:10:50 crc kubenswrapper[4780]: I1205 07:10:50.516374 4780 
generic.go:334] "Generic (PLEG): container finished" podID="5dca305d-8790-404f-bd29-7102f11b8eff" containerID="0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0" exitCode=0 Dec 05 07:10:50 crc kubenswrapper[4780]: I1205 07:10:50.516468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerDied","Data":"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0"} Dec 05 07:10:51 crc kubenswrapper[4780]: I1205 07:10:51.528517 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerStarted","Data":"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec"} Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.253553 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.254195 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.314298 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.339138 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dt2m" podStartSLOduration=7.890687707 podStartE2EDuration="11.339114711s" podCreationTimestamp="2025-12-05 07:10:45 +0000 UTC" firstStartedPulling="2025-12-05 07:10:47.480131283 +0000 UTC m=+1481.549647615" lastFinishedPulling="2025-12-05 07:10:50.928558287 +0000 UTC m=+1484.998074619" observedRunningTime="2025-12-05 07:10:51.550380898 +0000 UTC m=+1485.619897240" watchObservedRunningTime="2025-12-05 07:10:56.339114711 +0000 UTC m=+1490.408631043" Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.625136 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:56 crc kubenswrapper[4780]: I1205 07:10:56.674497 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:58 crc kubenswrapper[4780]: I1205 07:10:58.604662 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dt2m" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="registry-server" containerID="cri-o://c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec" gracePeriod=2 Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.048858 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.218484 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities\") pod \"5dca305d-8790-404f-bd29-7102f11b8eff\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.218694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content\") pod \"5dca305d-8790-404f-bd29-7102f11b8eff\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.218725 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rf7q\" (UniqueName: \"kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q\") pod \"5dca305d-8790-404f-bd29-7102f11b8eff\" (UID: \"5dca305d-8790-404f-bd29-7102f11b8eff\") " Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.219602 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities" (OuterVolumeSpecName: "utilities") pod "5dca305d-8790-404f-bd29-7102f11b8eff" (UID: "5dca305d-8790-404f-bd29-7102f11b8eff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.227076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q" (OuterVolumeSpecName: "kube-api-access-7rf7q") pod "5dca305d-8790-404f-bd29-7102f11b8eff" (UID: "5dca305d-8790-404f-bd29-7102f11b8eff"). InnerVolumeSpecName "kube-api-access-7rf7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.239799 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dca305d-8790-404f-bd29-7102f11b8eff" (UID: "5dca305d-8790-404f-bd29-7102f11b8eff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.320795 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.320827 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dca305d-8790-404f-bd29-7102f11b8eff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.320840 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rf7q\" (UniqueName: \"kubernetes.io/projected/5dca305d-8790-404f-bd29-7102f11b8eff-kube-api-access-7rf7q\") on node \"crc\" DevicePath \"\"" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.616385 4780 generic.go:334] "Generic (PLEG): container finished" podID="5dca305d-8790-404f-bd29-7102f11b8eff" containerID="c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec" exitCode=0 Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.616427 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerDied","Data":"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec"} Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.616457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dt2m" event={"ID":"5dca305d-8790-404f-bd29-7102f11b8eff","Type":"ContainerDied","Data":"0d85b82c0c07b4705745bb57b12f6ac317fa7de3465b4282b61adc9bf72ae92b"} Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.616463 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dt2m" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.616493 4780 scope.go:117] "RemoveContainer" containerID="c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.638824 4780 scope.go:117] "RemoveContainer" containerID="0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.654749 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.662338 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dt2m"] Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.686175 4780 scope.go:117] "RemoveContainer" containerID="f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.722007 4780 scope.go:117] "RemoveContainer" containerID="c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec" Dec 05 07:10:59 crc kubenswrapper[4780]: E1205 07:10:59.722851 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec\": container with ID starting with c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec not found: ID does not exist" containerID="c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.722959 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec"} err="failed to get container status \"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec\": rpc error: code = NotFound desc = could not find container \"c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec\": container with ID starting with c10646cccedb879402dce11ed907570d2e5b0f94e3ec955f3d9a88df5b8782ec not found: ID does not exist" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.722981 4780 scope.go:117] "RemoveContainer" containerID="0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0" Dec 05 07:10:59 crc kubenswrapper[4780]: E1205 07:10:59.723803 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0\": container with ID starting with 0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0 not found: ID does not exist" containerID="0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.723863 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0"} err="failed to get container status \"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0\": rpc error: code = NotFound desc = could not find container \"0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0\": container with ID starting with 0889f04a3080238a3e1409a89cbcf19d0248b5fb1553e4b3e4344af02c6188a0 not found: ID does not exist" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.723935 4780 scope.go:117] "RemoveContainer" 
containerID="f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f" Dec 05 07:10:59 crc kubenswrapper[4780]: E1205 07:10:59.724259 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f\": container with ID starting with f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f not found: ID does not exist" containerID="f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f" Dec 05 07:10:59 crc kubenswrapper[4780]: I1205 07:10:59.724303 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f"} err="failed to get container status \"f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f\": rpc error: code = NotFound desc = could not find container \"f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f\": container with ID starting with f59cb77702799032e87c5222b64be2fd860e4cdccb5a989d7ebff53ccc0ee46f not found: ID does not exist" Dec 05 07:11:00 crc kubenswrapper[4780]: I1205 07:11:00.149071 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" path="/var/lib/kubelet/pods/5dca305d-8790-404f-bd29-7102f11b8eff/volumes" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.054272 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.054775 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="65736cb4-25b2-402e-8dfe-d00b218a274b" containerName="openstackclient" containerID="cri-o://b9808fa835d43d815f703095686bccb9a6eedb6aab78ee5755aeddb342d50d7a" gracePeriod=2 Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.068430 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.424592 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.450834 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance4824-account-delete-4st4x"] Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.455234 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65736cb4-25b2-402e-8dfe-d00b218a274b" containerName="openstackclient" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455265 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="65736cb4-25b2-402e-8dfe-d00b218a274b" containerName="openstackclient" Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.455289 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="extract-content" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455295 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="extract-content" Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.455309 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="extract-utilities" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455315 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" 
containerName="extract-utilities" Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.455343 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="registry-server" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455349 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="registry-server" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455526 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="65736cb4-25b2-402e-8dfe-d00b218a274b" containerName="openstackclient" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.455545 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dca305d-8790-404f-bd29-7102f11b8eff" containerName="registry-server" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.456199 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.497191 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.497503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnrt\" (UniqueName: \"kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.514058 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance4824-account-delete-4st4x"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.592850 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.593537 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="openstack-network-exporter" containerID="cri-o://d7f5fd7515ed34f074ee09f78ddd69456ef45c158b4ca80becb54c10be0aea32" gracePeriod=300 Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.603032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.603161 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnrt\" (UniqueName: \"kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.606754 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.608218 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:01 crc kubenswrapper[4780]: E1205 07:11:01.608265 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data podName:1e6efd4f-660c-44e1-bf69-8b1cec6a6e85 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:02.108249738 +0000 UTC m=+1496.177766070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data") pod "rabbitmq-cell1-server-0" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85") : configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.664348 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutrona927-account-delete-5chq6"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.665859 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.684519 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutrona927-account-delete-5chq6"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.707762 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnrt\" (UniqueName: \"kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt\") pod \"glance4824-account-delete-4st4x\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.795085 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bsjtr"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.806508 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.806604 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq7gs\" (UniqueName: \"kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.818721 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.819329 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bsjtr"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.832936 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.833190 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" containerID="cri-o://70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" gracePeriod=30 Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.833605 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="openstack-network-exporter" containerID="cri-o://9654c7269b622680dcb56608c38fc0a232f404664727ed94cb9dd7668100f74a" gracePeriod=30 Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.921555 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.922768 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.935224 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.935576 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq7gs\" (UniqueName: \"kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.949058 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.977429 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:01 crc kubenswrapper[4780]: I1205 07:11:01.998484 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq7gs\" (UniqueName: \"kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs\") pod \"neutrona927-account-delete-5chq6\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.037107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvgj\" (UniqueName: \"kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj\") pod \"barbican796f-account-delete-h5ds7\" (UID: 
\"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.037194 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts\") pod \"barbican796f-account-delete-h5ds7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.043136 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gqpwk"] Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.109664 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.138836 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gqpwk"] Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.141120 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvgj\" (UniqueName: \"kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj\") pod \"barbican796f-account-delete-h5ds7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.141208 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts\") pod \"barbican796f-account-delete-h5ds7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:02 crc kubenswrapper[4780]: I1205 07:11:02.141996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts\") pod \"barbican796f-account-delete-h5ds7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:02 crc kubenswrapper[4780]: E1205 07:11:02.144223 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:02 crc kubenswrapper[4780]: E1205 07:11:02.144284 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data podName:1e6efd4f-660c-44e1-bf69-8b1cec6a6e85 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:03.144267187 +0000 UTC m=+1497.213783519 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data") pod "rabbitmq-cell1-server-0" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85") : configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.227058 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="ovsdbserver-nb" containerID="cri-o://5c9c067c92697e48033b3641b520cfc47f50b10a41d6b3d91152e79157a514bf" gracePeriod=300 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.254059 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvgj\" (UniqueName: \"kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj\") pod \"barbican796f-account-delete-h5ds7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.341638 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.761663 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa3ab37e-e167-44dd-985c-c8f6b067cfdd/ovsdbserver-nb/0.log" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.761978 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerID="d7f5fd7515ed34f074ee09f78ddd69456ef45c158b4ca80becb54c10be0aea32" exitCode=2 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.761998 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerID="5c9c067c92697e48033b3641b520cfc47f50b10a41d6b3d91152e79157a514bf" exitCode=143 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:02.859453 4780 generic.go:334] "Generic (PLEG): container finished" podID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerID="9654c7269b622680dcb56608c38fc0a232f404664727ed94cb9dd7668100f74a" exitCode=2 Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.204405 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.204685 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data podName:1e6efd4f-660c-44e1-bf69-8b1cec6a6e85 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:05.20466391 +0000 UTC m=+1499.274180242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data") pod "rabbitmq-cell1-server-0" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85") : configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.312546 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79e9679-696f-498c-a1c0-d2d465c637fd" path="/var/lib/kubelet/pods/c79e9679-696f-498c-a1c0-d2d465c637fd/volumes" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.313536 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ecabbe-038f-4714-b9a1-5f2efef47afd" path="/var/lib/kubelet/pods/d8ecabbe-038f-4714-b9a1-5f2efef47afd/volumes" Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.314103 4780 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.175s" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.314128 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerDied","Data":"d7f5fd7515ed34f074ee09f78ddd69456ef45c158b4ca80becb54c10be0aea32"} Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.314158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerDied","Data":"5c9c067c92697e48033b3641b520cfc47f50b10a41d6b3d91152e79157a514bf"} Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.314200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerDied","Data":"9654c7269b622680dcb56608c38fc0a232f404664727ed94cb9dd7668100f74a"} Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.314215 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.318819 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/ovn-controller-ovs-lq2sf" secret="" err="secret \"ovncontroller-ovncontroller-dockercfg-nrvxm\" not found" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336475 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336511 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336523 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5f9r8"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336534 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5f9r8"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336550 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mqbgb"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336561 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mqbgb"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.336573 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337596 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337611 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7kv9n"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337621 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fs2vs"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337633 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337646 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7kv9n"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337657 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-54wt4"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337667 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337677 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337718 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337730 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.337743 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338495 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2mprp"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338526 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-jzmzj"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338535 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zhcvf"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 
07:11:03.338546 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-jzmzj"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338555 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338568 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zhcvf"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338579 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2mprp"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338594 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.338606 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339345 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339363 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339371 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339382 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339393 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339405 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339416 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339425 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339436 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339608 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener-log" containerID="cri-o://0eb1f9f781814534359ecc748e52c6e4547659a97d7852a9bc35e6e85c9c72d4" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.339741 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.348548 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener" containerID="cri-o://21cb52d533dbe56f4988844a69a64aaf8e041956d1ff9074d70672e4e95db8ee" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.355337 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.357211 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-log" containerID="cri-o://91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.357942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.360864 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-54wt4" podUID="2fb4032b-ac6a-46ea-b301-500bf63d3518" containerName="openstack-network-exporter" containerID="cri-o://614162aaf287785a7cbbc0f2442a28903aec8fa26e737ecff5c47ce7458a1617" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.361099 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-log" containerID="cri-o://4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.361222 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.361226 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362118 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-httpd" containerID="cri-o://64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362202 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-log" containerID="cri-o://e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362288 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-httpd" containerID="cri-o://0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362345 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="openstack-network-exporter" containerID="cri-o://eb3b7412e25e8f35afb87b1a111e1efdac1f6846e9ac48835f6a92743cf44e0b" gracePeriod=300 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362356 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="dnsmasq-dns" containerID="cri-o://c66ff948bbb0223dc5ef04d22b4b1a8ff3bff1768e3f1c9bbb557cbcf9b1c5fc" gracePeriod=10 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362669 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-httpd" containerID="cri-o://65ffda40cf80de35ae936a6d650ef297fd76dee5044c69bbd6c7b2b5e327da96" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362869 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d68479b85-xqbrx" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-api" containerID="cri-o://c64aac9dc2da1feacd133e1cbfed47f07ec40d71d95e0fe650627bb11646f1e3" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362947 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-server" containerID="cri-o://71729bfa39e43aff8b7d4b4b743bda8c1770fd26a85d5b0e6a9fae0194a3feb3" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.362996 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-api" containerID="cri-o://5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363263 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d68479b85-xqbrx" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-httpd" 
containerID="cri-o://34148739eeef05370a2f9f987ab32cbec201eca9fad402598ae56efaf7b63ca0" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363306 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="cinder-scheduler" containerID="cri-o://bbf7ba30828f7305d2c91dd07104e5ee99cdcba79c89362c856ebc2c639710e1" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363390 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="probe" containerID="cri-o://b7d4a3dac21d90122fe88d5308d7939f24f6b2475dc30bbbf81f06bd4930e1a3" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363519 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669bccb86b-8cjsq" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-log" containerID="cri-o://43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363614 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669bccb86b-8cjsq" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-api" containerID="cri-o://0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363772 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-server" containerID="cri-o://197ec1cb2e42b7eee2a07e5abda174f729b39f1a30d1ee19cdf7fc349964c7dc" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.363942 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-updater" containerID="cri-o://f9700fc7cbaed727d4ba60770c2a1911c7899565ee6e7549689b2e0350e69b87" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364040 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="swift-recon-cron" containerID="cri-o://21310f93372293bb789a92ae777f6b29d31624531841b6a89f2146486609c159" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364093 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="rsync" containerID="cri-o://6914c82c6a21ad55bc14f021428bafdc5f53bb59cfc97dcdf06a93af43f76ed4" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364133 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-expirer" containerID="cri-o://735e0179ac8e0b6856304c581093683f8810b9d6725ea83df77678572a5f9297" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364177 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-updater" 
containerID="cri-o://f81d963680169349f2f9fb3728fe7259c5c1a6a053a4e334e212812bad3a43e5" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364226 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-auditor" containerID="cri-o://977b8988c9e4ea255e060358102e1022eac55a01e723c56a5c68e57ee2a94e80" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364280 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-replicator" containerID="cri-o://960fcbfc395793c247badb8fdde1b5984a271aa278bc9c88ddf14fc26e90bea3" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.364330 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-server" containerID="cri-o://81ec73c2cb79b863b687984909f69f4090990583d64f1c7fd7e543c07d0c1a61" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365063 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-replicator" containerID="cri-o://2277d9a62f83380fb829fd9437023d5d7a6a251cbed820feb5e9eaa847bd436b" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365136 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-auditor" containerID="cri-o://986982b89c1a073d0278b92d9a5c06cd37c1b46a70abd7b84b4c71b882785ce2" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365192 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-replicator" containerID="cri-o://82189ab9e1551dc0f5140613417ba953bda692d106179c487635cae012edccd5" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365232 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-auditor" containerID="cri-o://9225d4a3e6c1922ea527f674b2f5bde35d6a0f6a680ffb6e130f1dab6447551d" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365272 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-server" containerID="cri-o://cf24f362016fc9a9b24061e240eb546322a73d461803e71a142f2ecfcd0d8c78" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365672 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-reaper" containerID="cri-o://01c18b876687d6b2d2dd2ebf76cfbc99348debda854332c91ee5bc6b8029eb69" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.365803 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366234 4780 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/barbican-worker-59d58fb65c-nzf5k" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker-log" containerID="cri-o://fa0a6343d445a98183bd0e28c4205f4ee3dbabc1af80c9794439de122f2d4f70" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366372 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" containerID="cri-o://091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366431 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59d58fb65c-nzf5k" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker" containerID="cri-o://54efef79e6df78f9c7a79be7c0902ee44a3970e79099cd25bf9047386200ff4c" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366571 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api-log" containerID="cri-o://672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366678 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" containerID="cri-o://7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.366741 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api" containerID="cri-o://fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.414236 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44g4d\" (UniqueName: \"kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.414581 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhx99\" (UniqueName: \"kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99\") pod \"novacell079a0-account-delete-dfjw8\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.414640 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts\") pod \"novacell079a0-account-delete-dfjw8\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.414789 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv8rj\" (UniqueName: 
\"kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.414924 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.415865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.425701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.425840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfstv\" (UniqueName: \"kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.427027 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.427231 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" containerID="cri-o://1f72197d67bb45b009e4fc63d14efd6e5634ae9d06e9c8d83b9c4a8b9a6be45a" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.427371 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" containerID="cri-o://d247be1b147a98f7d05a4bb3c8635747189f02eca874ffceb138264c83747cc4" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.427478 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.427542 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:03.927524193 +0000 UTC m=+1497.997040525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.428289 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 07:11:03 crc kubenswrapper[4780]: E1205 07:11:03.428347 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data podName:f5032d09-8298-4941-8b4b-0f24a57b8ced nodeName:}" failed. No retries permitted until 2025-12-05 07:11:03.928330914 +0000 UTC m=+1497.997847246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data") pod "rabbitmq-server-0" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced") : configmap "rabbitmq-config-data" not found Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.463599 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.487387 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m5dbk"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.505629 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m5dbk"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.549819 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.549908 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfstv\" (UniqueName: \"kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.549964 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44g4d\" (UniqueName: \"kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.549988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhx99\" (UniqueName: \"kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99\") pod \"novacell079a0-account-delete-dfjw8\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.550013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts\") pod \"novacell079a0-account-delete-dfjw8\" (UID: 
\"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.550052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv8rj\" (UniqueName: \"kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.550106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.550147 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.557107 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.557974 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts\") pod \"novacell079a0-account-delete-dfjw8\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.558679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.559912 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.564853 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="828f916b-54ac-4498-b1a7-139334944d9b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://059a93cd192837f3113a8d0e3807d7b3f8ab87f622645af37531a78b78d5d7f8" gracePeriod=30 Dec 05 07:11:03 crc kubenswrapper[4780]: I1205 07:11:03.565799 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.599934 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-44g4d\" (UniqueName: \"kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d\") pod \"placementa463-account-delete-wsgnm\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.601844 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2782-account-create-update-z246t"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.602563 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfstv\" (UniqueName: \"kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv\") pod \"cinder1be5-account-delete-6mpnc\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.609180 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhx99\" (UniqueName: \"kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99\") pod \"novacell079a0-account-delete-dfjw8\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.622432 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2782-account-create-update-z246t"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.652531 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv8rj\" (UniqueName: \"kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj\") pod \"novaapiebdd-account-delete-2px9p\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.682766 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.780214 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="rabbitmq" containerID="cri-o://530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d" gracePeriod=604800 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.781291 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:03.781491 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerName="nova-scheduler-scheduler" containerID="cri-o://e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" gracePeriod=30 Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.850577 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 
07:11:03.873613 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="ovsdbserver-sb" containerID="cri-o://ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" gracePeriod=300 Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.898159 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.898223 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.898479 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.979739 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.995569 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:04.995527197 +0000 UTC m=+1499.065043529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.988638 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.982330 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:03.996199 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data podName:f5032d09-8298-4941-8b4b-0f24a57b8ced nodeName:}" failed. No retries permitted until 2025-12-05 07:11:04.996189725 +0000 UTC m=+1499.065706057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data") pod "rabbitmq-server-0" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced") : configmap "rabbitmq-config-data" not found Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.009173 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:04.075103 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd is running failed: container process not found" containerID="ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" cmd=["/usr/bin/pidof","ovsdb-server"] Dec 05 07:11:04 crc kubenswrapper[4780]: E1205 07:11:04.075159 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="ovsdbserver-sb" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.082225 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.118404 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.141977 4780 generic.go:334] "Generic (PLEG): container finished" podID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerID="e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.142096 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerDied","Data":"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.167369 4780 generic.go:334] "Generic (PLEG): container finished" podID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerID="91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.168193 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.179964 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3dd0d5-7b46-4ad7-b31d-784587823a79" path="/var/lib/kubelet/pods/0d3dd0d5-7b46-4ad7-b31d-784587823a79/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.180637 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b6ab49-8909-4604-bbf0-1d5475a52cdb" path="/var/lib/kubelet/pods/30b6ab49-8909-4604-bbf0-1d5475a52cdb/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.182192 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44428dc2-af95-4541-b700-7ac3b81164d5" path="/var/lib/kubelet/pods/44428dc2-af95-4541-b700-7ac3b81164d5/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.183506 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87645633-adc7-4611-ac03-0bd01623a44e" path="/var/lib/kubelet/pods/87645633-adc7-4611-ac03-0bd01623a44e/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.186776 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a154e1e8-52d0-43c2-8685-cd8769db58d0" path="/var/lib/kubelet/pods/a154e1e8-52d0-43c2-8685-cd8769db58d0/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.187401 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67223f0-4471-424c-b74d-886cec703c8a" path="/var/lib/kubelet/pods/c67223f0-4471-424c-b74d-886cec703c8a/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.188323 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d536c619-112b-48c1-8efe-2e700ead9f8b" path="/var/lib/kubelet/pods/d536c619-112b-48c1-8efe-2e700ead9f8b/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.189742 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02b8260-4e63-48c3-b879-3840b95b60d5" path="/var/lib/kubelet/pods/e02b8260-4e63-48c3-b879-3840b95b60d5/volumes" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.190781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerDied","Data":"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.199572 4780 generic.go:334] "Generic (PLEG): container finished" podID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerID="672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.199654 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerDied","Data":"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223867 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="735e0179ac8e0b6856304c581093683f8810b9d6725ea83df77678572a5f9297" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223910 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="f81d963680169349f2f9fb3728fe7259c5c1a6a053a4e334e212812bad3a43e5" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223918 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="977b8988c9e4ea255e060358102e1022eac55a01e723c56a5c68e57ee2a94e80" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223925 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="960fcbfc395793c247badb8fdde1b5984a271aa278bc9c88ddf14fc26e90bea3" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223932 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="f9700fc7cbaed727d4ba60770c2a1911c7899565ee6e7549689b2e0350e69b87" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223940 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="82189ab9e1551dc0f5140613417ba953bda692d106179c487635cae012edccd5" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223946 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="01c18b876687d6b2d2dd2ebf76cfbc99348debda854332c91ee5bc6b8029eb69" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223957 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="986982b89c1a073d0278b92d9a5c06cd37c1b46a70abd7b84b4c71b882785ce2" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.223964 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="2277d9a62f83380fb829fd9437023d5d7a6a251cbed820feb5e9eaa847bd436b" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224005 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"735e0179ac8e0b6856304c581093683f8810b9d6725ea83df77678572a5f9297"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224031 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"f81d963680169349f2f9fb3728fe7259c5c1a6a053a4e334e212812bad3a43e5"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224042 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"977b8988c9e4ea255e060358102e1022eac55a01e723c56a5c68e57ee2a94e80"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"960fcbfc395793c247badb8fdde1b5984a271aa278bc9c88ddf14fc26e90bea3"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"f9700fc7cbaed727d4ba60770c2a1911c7899565ee6e7549689b2e0350e69b87"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"82189ab9e1551dc0f5140613417ba953bda692d106179c487635cae012edccd5"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224220 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"01c18b876687d6b2d2dd2ebf76cfbc99348debda854332c91ee5bc6b8029eb69"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224233 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"986982b89c1a073d0278b92d9a5c06cd37c1b46a70abd7b84b4c71b882785ce2"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.224242 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"2277d9a62f83380fb829fd9437023d5d7a6a251cbed820feb5e9eaa847bd436b"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.239842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.244605 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.257527 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerID="0eb1f9f781814534359ecc748e52c6e4547659a97d7852a9bc35e6e85c9c72d4" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.257635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerDied","Data":"0eb1f9f781814534359ecc748e52c6e4547659a97d7852a9bc35e6e85c9c72d4"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.292092 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerID="c66ff948bbb0223dc5ef04d22b4b1a8ff3bff1768e3f1c9bbb557cbcf9b1c5fc" exitCode=0 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.292144 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" event={"ID":"b7f1d4f8-b32f-4448-8db1-ff7299256169","Type":"ContainerDied","Data":"c66ff948bbb0223dc5ef04d22b4b1a8ff3bff1768e3f1c9bbb557cbcf9b1c5fc"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.292912 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.308362 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerID="1f72197d67bb45b009e4fc63d14efd6e5634ae9d06e9c8d83b9c4a8b9a6be45a" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.308448 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerDied","Data":"1f72197d67bb45b009e4fc63d14efd6e5634ae9d06e9c8d83b9c4a8b9a6be45a"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.319462 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-54wt4_2fb4032b-ac6a-46ea-b301-500bf63d3518/openstack-network-exporter/0.log" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.319500 4780 generic.go:334] "Generic (PLEG): container finished" podID="2fb4032b-ac6a-46ea-b301-500bf63d3518" containerID="614162aaf287785a7cbbc0f2442a28903aec8fa26e737ecff5c47ce7458a1617" 
exitCode=2 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.319567 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-54wt4" event={"ID":"2fb4032b-ac6a-46ea-b301-500bf63d3518","Type":"ContainerDied","Data":"614162aaf287785a7cbbc0f2442a28903aec8fa26e737ecff5c47ce7458a1617"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.333106 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="galera" containerID="cri-o://d68e53a20f7b0772cf31f43fd6387417cf438c45cf97337ca3c20b74894ceb64" gracePeriod=30 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.339984 4780 generic.go:334] "Generic (PLEG): container finished" podID="65736cb4-25b2-402e-8dfe-d00b218a274b" containerID="b9808fa835d43d815f703095686bccb9a6eedb6aab78ee5755aeddb342d50d7a" exitCode=137 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.357758 4780 generic.go:334] "Generic (PLEG): container finished" podID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerID="43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.357840 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerDied","Data":"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.367344 4780 generic.go:334] "Generic (PLEG): container finished" podID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerID="fa0a6343d445a98183bd0e28c4205f4ee3dbabc1af80c9794439de122f2d4f70" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.367669 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerDied","Data":"fa0a6343d445a98183bd0e28c4205f4ee3dbabc1af80c9794439de122f2d4f70"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.414513 4780 generic.go:334] "Generic (PLEG): container finished" podID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerID="4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.414642 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerDied","Data":"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.425144 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.425358 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.425415 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.434161 4780 generic.go:334] "Generic (PLEG): container finished" podID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerID="091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21" exitCode=143 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.434682 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerDied","Data":"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.438487 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fee336d1-2c89-4ccb-b6ea-69a4697b7a29/ovsdbserver-sb/0.log" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.438521 4780 generic.go:334] "Generic (PLEG): container finished" podID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerID="eb3b7412e25e8f35afb87b1a111e1efdac1f6846e9ac48835f6a92743cf44e0b" exitCode=2 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.438777 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerDied","Data":"eb3b7412e25e8f35afb87b1a111e1efdac1f6846e9ac48835f6a92743cf44e0b"} Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.494677 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="rabbitmq" containerID="cri-o://5450b625e2fd6628a65ff330106c052fa609d51529eba7c91d50eb2a2c2bfed0" gracePeriod=604800 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.527699 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.527816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.527857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.529150 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content\") pod \"community-operators-kqdm6\" (UID: 
\"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.529386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.551277 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.565591 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq\") pod \"community-operators-kqdm6\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.572124 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa3ab37e-e167-44dd-985c-c8f6b067cfdd/ovsdbserver-nb/0.log" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.572191 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.733893 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734280 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55s2d\" (UniqueName: \"kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734326 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734384 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: 
\"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734508 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.734606 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs\") pod \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\" (UID: \"aa3ab37e-e167-44dd-985c-c8f6b067cfdd\") " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.736156 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.736705 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config" (OuterVolumeSpecName: "config") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.737313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts" (OuterVolumeSpecName: "scripts") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.753254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d" (OuterVolumeSpecName: "kube-api-access-55s2d") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "kube-api-access-55s2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.809000 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.843075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.847216 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55s2d\" (UniqueName: \"kubernetes.io/projected/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-kube-api-access-55s2d\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.847264 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.847275 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.847284 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.847293 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.878546 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.927487 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.927762 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerName="nova-cell1-conductor-conductor" containerID="cri-o://6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" gracePeriod=30 Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.936052 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.944382 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.951559 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.951587 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.951596 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.973489 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "aa3ab37e-e167-44dd-985c-c8f6b067cfdd" (UID: "aa3ab37e-e167-44dd-985c-c8f6b067cfdd"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:04 crc kubenswrapper[4780]: I1205 07:11:04.974037 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tdvq2"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:04.997008 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tdvq2"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.027944 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jrvl"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.043573 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.043865 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" gracePeriod=30 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.053009 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa3ab37e-e167-44dd-985c-c8f6b067cfdd-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.053084 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.053135 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data podName:f5032d09-8298-4941-8b4b-0f24a57b8ced nodeName:}" failed. No retries permitted until 2025-12-05 07:11:07.053114335 +0000 UTC m=+1501.122630667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data") pod "rabbitmq-server-0" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced") : configmap "rabbitmq-config-data" not found Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.053398 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.053423 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:07.053415443 +0000 UTC m=+1501.122931775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.058520 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jrvl"] Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.258214 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.258288 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data podName:1e6efd4f-660c-44e1-bf69-8b1cec6a6e85 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:09.258271788 +0000 UTC m=+1503.327788120 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data") pod "rabbitmq-cell1-server-0" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85") : configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511280 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="6914c82c6a21ad55bc14f021428bafdc5f53bb59cfc97dcdf06a93af43f76ed4" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511583 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="81ec73c2cb79b863b687984909f69f4090990583d64f1c7fd7e543c07d0c1a61" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511592 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="9225d4a3e6c1922ea527f674b2f5bde35d6a0f6a680ffb6e130f1dab6447551d" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511600 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="cf24f362016fc9a9b24061e240eb546322a73d461803e71a142f2ecfcd0d8c78" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511606 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="197ec1cb2e42b7eee2a07e5abda174f729b39f1a30d1ee19cdf7fc349964c7dc" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511671 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"6914c82c6a21ad55bc14f021428bafdc5f53bb59cfc97dcdf06a93af43f76ed4"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"81ec73c2cb79b863b687984909f69f4090990583d64f1c7fd7e543c07d0c1a61"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511714 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"9225d4a3e6c1922ea527f674b2f5bde35d6a0f6a680ffb6e130f1dab6447551d"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"cf24f362016fc9a9b24061e240eb546322a73d461803e71a142f2ecfcd0d8c78"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.511737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"197ec1cb2e42b7eee2a07e5abda174f729b39f1a30d1ee19cdf7fc349964c7dc"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.549183 4780 generic.go:334] "Generic (PLEG): container finished" podID="828f916b-54ac-4498-b1a7-139334944d9b" containerID="059a93cd192837f3113a8d0e3807d7b3f8ab87f622645af37531a78b78d5d7f8" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.549293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"828f916b-54ac-4498-b1a7-139334944d9b","Type":"ContainerDied","Data":"059a93cd192837f3113a8d0e3807d7b3f8ab87f622645af37531a78b78d5d7f8"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.640613 4780 generic.go:334] "Generic (PLEG): container finished" podID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerID="71729bfa39e43aff8b7d4b4b743bda8c1770fd26a85d5b0e6a9fae0194a3feb3" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.640648 4780 generic.go:334] "Generic (PLEG): container finished" podID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerID="65ffda40cf80de35ae936a6d650ef297fd76dee5044c69bbd6c7b2b5e327da96" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.640700 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerDied","Data":"71729bfa39e43aff8b7d4b4b743bda8c1770fd26a85d5b0e6a9fae0194a3feb3"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.640726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerDied","Data":"65ffda40cf80de35ae936a6d650ef297fd76dee5044c69bbd6c7b2b5e327da96"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.685282 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aa3ab37e-e167-44dd-985c-c8f6b067cfdd/ovsdbserver-nb/0.log" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.685353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aa3ab37e-e167-44dd-985c-c8f6b067cfdd","Type":"ContainerDied","Data":"94f34fd57629a6d65f4cd39e9ad6afd09fdbde4efc61eda7e658a5827fd74481"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.685388 4780 scope.go:117] "RemoveContainer" containerID="d7f5fd7515ed34f074ee09f78ddd69456ef45c158b4ca80becb54c10be0aea32" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.685512 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.735006 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fee336d1-2c89-4ccb-b6ea-69a4697b7a29/ovsdbserver-sb/0.log" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.735095 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.736074 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_fee336d1-2c89-4ccb-b6ea-69a4697b7a29/ovsdbserver-sb/0.log" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.736142 4780 generic.go:334] "Generic (PLEG): container finished" podID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerID="ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" exitCode=143 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.736199 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerDied","Data":"ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.774325 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="galera" probeResult="failure" output="command timed out" Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.774699 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.788602 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerID="34148739eeef05370a2f9f987ab32cbec201eca9fad402598ae56efaf7b63ca0" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.788715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerDied","Data":"34148739eeef05370a2f9f987ab32cbec201eca9fad402598ae56efaf7b63ca0"} Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.796666 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811169 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811330 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811378 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc 
kubenswrapper[4780]: I1205 07:11:05.811545 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zgd\" (UniqueName: \"kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811619 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811654 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811764 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.811819 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts\") pod \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\" (UID: \"fee336d1-2c89-4ccb-b6ea-69a4697b7a29\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.815922 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.816601 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config" (OuterVolumeSpecName: "config") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.818349 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts" (OuterVolumeSpecName: "scripts") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.824351 4780 generic.go:334] "Generic (PLEG): container finished" podID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerID="b7d4a3dac21d90122fe88d5308d7939f24f6b2475dc30bbbf81f06bd4930e1a3" exitCode=0 Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.824828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerDied","Data":"b7d4a3dac21d90122fe88d5308d7939f24f6b2475dc30bbbf81f06bd4930e1a3"} Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.829518 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.837442 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd" (OuterVolumeSpecName: "kube-api-access-r9zgd") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "kube-api-access-r9zgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.858391 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7d68479b85-xqbrx" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.871224 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.871595 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:05 crc kubenswrapper[4780]: E1205 07:11:05.871633 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerName="nova-cell1-conductor-conductor" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.895137 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.907166 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.907236 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915358 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915397 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915447 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpvh8\" (UniqueName: \"kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8\") pod \"65736cb4-25b2-402e-8dfe-d00b218a274b\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret\") pod \"65736cb4-25b2-402e-8dfe-d00b218a274b\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915632 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915681 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwqj2\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915716 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config\") pod \"65736cb4-25b2-402e-8dfe-d00b218a274b\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915792 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle\") pod \"65736cb4-25b2-402e-8dfe-d00b218a274b\" (UID: \"65736cb4-25b2-402e-8dfe-d00b218a274b\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915814 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.915842 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd\") pod \"5fd70346-51cf-44fc-8cea-48ee35deadb0\" (UID: \"5fd70346-51cf-44fc-8cea-48ee35deadb0\") " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916325 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916342 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zgd\" (UniqueName: \"kubernetes.io/projected/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-kube-api-access-r9zgd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916354 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916363 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916373 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.916382 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.932911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.933887 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.937981 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.946269 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.968063 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.969904 4780 scope.go:117] "RemoveContainer" containerID="5c9c067c92697e48033b3641b520cfc47f50b10a41d6b3d91152e79157a514bf" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.976036 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.977136 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8" (OuterVolumeSpecName: "kube-api-access-rpvh8") pod "65736cb4-25b2-402e-8dfe-d00b218a274b" (UID: "65736cb4-25b2-402e-8dfe-d00b218a274b"). InnerVolumeSpecName "kube-api-access-rpvh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.985450 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2" (OuterVolumeSpecName: "kube-api-access-zwqj2") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "kube-api-access-zwqj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:05 crc kubenswrapper[4780]: I1205 07:11:05.997179 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65736cb4-25b2-402e-8dfe-d00b218a274b" (UID: "65736cb4-25b2-402e-8dfe-d00b218a274b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.025121 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033671 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbmk\" (UniqueName: \"kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs\") pod \"828f916b-54ac-4498-b1a7-139334944d9b\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033768 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs\") pod \"828f916b-54ac-4498-b1a7-139334944d9b\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033807 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033830 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.033953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034027 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zgh\" (UniqueName: \"kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh\") pod \"828f916b-54ac-4498-b1a7-139334944d9b\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034097 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc\") pod \"b7f1d4f8-b32f-4448-8db1-ff7299256169\" (UID: \"b7f1d4f8-b32f-4448-8db1-ff7299256169\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034122 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data\") pod 
\"828f916b-54ac-4498-b1a7-139334944d9b\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034148 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle\") pod \"828f916b-54ac-4498-b1a7-139334944d9b\" (UID: \"828f916b-54ac-4498-b1a7-139334944d9b\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034848 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwqj2\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-kube-api-access-zwqj2\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034868 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034894 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034904 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fd70346-51cf-44fc-8cea-48ee35deadb0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034912 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpvh8\" (UniqueName: \"kubernetes.io/projected/65736cb4-25b2-402e-8dfe-d00b218a274b-kube-api-access-rpvh8\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.034921 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5fd70346-51cf-44fc-8cea-48ee35deadb0-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.061562 4780 scope.go:117] "RemoveContainer" containerID="eb3b7412e25e8f35afb87b1a111e1efdac1f6846e9ac48835f6a92743cf44e0b" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.091492 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-54wt4_2fb4032b-ac6a-46ea-b301-500bf63d3518/openstack-network-exporter/0.log" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.091849 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-54wt4" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.120741 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh" (OuterVolumeSpecName: "kube-api-access-d5zgh") pod "828f916b-54ac-4498-b1a7-139334944d9b" (UID: "828f916b-54ac-4498-b1a7-139334944d9b"). InnerVolumeSpecName "kube-api-access-d5zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.129066 4780 scope.go:117] "RemoveContainer" containerID="ab7e9427aa59f58e7de8e8d8a84a95133b083fbe1830a42a48c9086af76847dd" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.142694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z68p\" (UniqueName: \"kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.142825 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.143140 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.143281 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.143327 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.143381 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir\") pod \"2fb4032b-ac6a-46ea-b301-500bf63d3518\" (UID: \"2fb4032b-ac6a-46ea-b301-500bf63d3518\") " Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.145176 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zgh\" (UniqueName: \"kubernetes.io/projected/828f916b-54ac-4498-b1a7-139334944d9b-kube-api-access-d5zgh\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.145295 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.153825 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.156313 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config" (OuterVolumeSpecName: "config") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.158013 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p" (OuterVolumeSpecName: "kube-api-access-5z68p") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "kube-api-access-5z68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.158487 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "828f916b-54ac-4498-b1a7-139334944d9b" (UID: "828f916b-54ac-4498-b1a7-139334944d9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.213507 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk" (OuterVolumeSpecName: "kube-api-access-5jbmk") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "kube-api-access-5jbmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.216129 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "65736cb4-25b2-402e-8dfe-d00b218a274b" (UID: "65736cb4-25b2-402e-8dfe-d00b218a274b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.237342 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56db168c-9500-4a17-9cd0-1bcfeeee167b" path="/var/lib/kubelet/pods/56db168c-9500-4a17-9cd0-1bcfeeee167b/volumes" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.247920 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3" path="/var/lib/kubelet/pods/9a3bc8e1-c401-4c09-a8bd-91fcb98e96e3/volumes" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.249351 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" path="/var/lib/kubelet/pods/aa3ab37e-e167-44dd-985c-c8f6b067cfdd/volumes" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254022 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254587 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254625 4780 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2fb4032b-ac6a-46ea-b301-500bf63d3518-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254647 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jbmk\" (UniqueName: \"kubernetes.io/projected/b7f1d4f8-b32f-4448-8db1-ff7299256169-kube-api-access-5jbmk\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254662 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z68p\" (UniqueName: \"kubernetes.io/projected/2fb4032b-ac6a-46ea-b301-500bf63d3518-kube-api-access-5z68p\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254698 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb4032b-ac6a-46ea-b301-500bf63d3518-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.254717 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.291316 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" containerID="cri-o://5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" gracePeriod=29 Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.327386 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.356986 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc 
kubenswrapper[4780]: I1205 07:11:06.358078 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.358079 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20 is running failed: container process not found" containerID="e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.361066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.361177 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20 is running failed: container process not found" containerID="e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.362169 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20 is running failed: container process not found" containerID="e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.362211 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerName="nova-scheduler-scheduler" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.363061 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "828f916b-54ac-4498-b1a7-139334944d9b" (UID: "828f916b-54ac-4498-b1a7-139334944d9b"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.468125 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.468175 4780 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.471029 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.471167 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.480600 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data" (OuterVolumeSpecName: "config-data") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.500652 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.515439 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.527606 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "828f916b-54ac-4498-b1a7-139334944d9b" (UID: "828f916b-54ac-4498-b1a7-139334944d9b"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.546419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.556607 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.558962 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5fd70346-51cf-44fc-8cea-48ee35deadb0" (UID: "5fd70346-51cf-44fc-8cea-48ee35deadb0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.573672 4780 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 05 07:11:06 crc kubenswrapper[4780]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 07:11:06 crc kubenswrapper[4780]: + source /usr/local/bin/container-scripts/functions Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNBridge=br-int Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNRemote=tcp:localhost:6642 Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNEncapType=geneve Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNAvailabilityZones= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ EnableChassisAsGateway=true Dec 05 07:11:06 crc kubenswrapper[4780]: ++ PhysicalNetworks= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNHostName= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 07:11:06 crc kubenswrapper[4780]: ++ ovs_dir=/var/lib/openvswitch Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 07:11:06 crc kubenswrapper[4780]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + cleanup_ovsdb_server_semaphore Dec 05 07:11:06 crc kubenswrapper[4780]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 07:11:06 crc kubenswrapper[4780]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-lq2sf" message=< Dec 05 07:11:06 crc kubenswrapper[4780]: Exiting ovsdb-server (5) [ OK ] Dec 05 07:11:06 crc kubenswrapper[4780]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 07:11:06 crc kubenswrapper[4780]: + source /usr/local/bin/container-scripts/functions Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNBridge=br-int Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNRemote=tcp:localhost:6642 Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNEncapType=geneve Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNAvailabilityZones= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ EnableChassisAsGateway=true Dec 05 07:11:06 crc kubenswrapper[4780]: ++ PhysicalNetworks= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNHostName= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 07:11:06 crc kubenswrapper[4780]: ++ ovs_dir=/var/lib/openvswitch Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 07:11:06 crc kubenswrapper[4780]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + cleanup_ovsdb_server_semaphore Dec 05 07:11:06 crc kubenswrapper[4780]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 07:11:06 crc kubenswrapper[4780]: > Dec 05 07:11:06 crc kubenswrapper[4780]: E1205 07:11:06.574141 4780 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 05 07:11:06 crc kubenswrapper[4780]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 05 07:11:06 crc kubenswrapper[4780]: + source /usr/local/bin/container-scripts/functions Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNBridge=br-int Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNRemote=tcp:localhost:6642 Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNEncapType=geneve Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNAvailabilityZones= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ EnableChassisAsGateway=true Dec 05 07:11:06 crc kubenswrapper[4780]: ++ PhysicalNetworks= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ OVNHostName= Dec 05 07:11:06 crc kubenswrapper[4780]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 05 07:11:06 crc kubenswrapper[4780]: ++ ovs_dir=/var/lib/openvswitch Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 05 07:11:06 crc kubenswrapper[4780]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 05 07:11:06 crc kubenswrapper[4780]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + sleep 0.5 Dec 05 07:11:06 crc kubenswrapper[4780]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 05 07:11:06 crc kubenswrapper[4780]: + cleanup_ovsdb_server_semaphore Dec 05 07:11:06 crc kubenswrapper[4780]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 05 07:11:06 crc kubenswrapper[4780]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 05 07:11:06 crc kubenswrapper[4780]: > pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" containerID="cri-o://7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.574180 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" containerID="cri-o://7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" gracePeriod=28 Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.576989 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577015 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577025 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577034 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577043 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577051 4780 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577062 4780 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.577070 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fd70346-51cf-44fc-8cea-48ee35deadb0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.579169 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.582482 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "fee336d1-2c89-4ccb-b6ea-69a4697b7a29" (UID: "fee336d1-2c89-4ccb-b6ea-69a4697b7a29"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.592122 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data" (OuterVolumeSpecName: "config-data") pod "828f916b-54ac-4498-b1a7-139334944d9b" (UID: "828f916b-54ac-4498-b1a7-139334944d9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.592723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2fb4032b-ac6a-46ea-b301-500bf63d3518" (UID: "2fb4032b-ac6a-46ea-b301-500bf63d3518"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.600460 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "65736cb4-25b2-402e-8dfe-d00b218a274b" (UID: "65736cb4-25b2-402e-8dfe-d00b218a274b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.618463 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config" (OuterVolumeSpecName: "config") pod "b7f1d4f8-b32f-4448-8db1-ff7299256169" (UID: "b7f1d4f8-b32f-4448-8db1-ff7299256169"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679249 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828f916b-54ac-4498-b1a7-139334944d9b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679287 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679297 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65736cb4-25b2-402e-8dfe-d00b218a274b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679307 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f1d4f8-b32f-4448-8db1-ff7299256169-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679316 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee336d1-2c89-4ccb-b6ea-69a4697b7a29-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.679325 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb4032b-ac6a-46ea-b301-500bf63d3518-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.775775 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.775822 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance4824-account-delete-4st4x"] Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.775837 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutrona927-account-delete-5chq6"] Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.860731 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.862535 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" event={"ID":"b7f1d4f8-b32f-4448-8db1-ff7299256169","Type":"ContainerDied","Data":"41a94c6ad8e328e7b0b5a2bfb216a6dd21eadb38d82c4a28034f6ea978d7c63f"} Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.862594 4780 scope.go:117] "RemoveContainer" containerID="c66ff948bbb0223dc5ef04d22b4b1a8ff3bff1768e3f1c9bbb557cbcf9b1c5fc" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.872949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance4824-account-delete-4st4x" event={"ID":"202ef989-0cbf-4120-8621-11201cfe3d64","Type":"ContainerStarted","Data":"b462c90b56a40d7557ce2f206ed6edc5449e261c1776ec03e6ec26fe44bd4b8c"} Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.913727 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.954292 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:49316->10.217.0.200:8775: read: connection reset by peer" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.954304 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:49330->10.217.0.200:8775: read: connection reset by peer" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.956794 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-54wt4_2fb4032b-ac6a-46ea-b301-500bf63d3518/openstack-network-exporter/0.log" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.956988 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-54wt4" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.957035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-54wt4" event={"ID":"2fb4032b-ac6a-46ea-b301-500bf63d3518","Type":"ContainerDied","Data":"c6b5bf27169922f945a1202d45f28ca83ef9e4658d2834e18b51e730f4ed0b4e"} Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.964290 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:55144->10.217.0.164:8776: read: connection reset by peer" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.966348 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.966369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"828f916b-54ac-4498-b1a7-139334944d9b","Type":"ContainerDied","Data":"e94759d132052653acf4e7af754c43f39443db4761339eb8878325c032b9c1d6"} Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.988278 4780 generic.go:334] "Generic (PLEG): container finished" podID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerID="e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" exitCode=0 Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.988345 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33af7252-1228-4051-bab0-cfcaee04fe1d","Type":"ContainerDied","Data":"e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20"} Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.995277 4780 generic.go:334] "Generic (PLEG): container finished" podID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerID="d68e53a20f7b0772cf31f43fd6387417cf438c45cf97337ca3c20b74894ceb64" exitCode=0 Dec 05 07:11:06 crc kubenswrapper[4780]: I1205 07:11:06.995356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerDied","Data":"d68e53a20f7b0772cf31f43fd6387417cf438c45cf97337ca3c20b74894ceb64"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.009111 4780 generic.go:334] "Generic (PLEG): container finished" podID="52793d91-2b27-4926-9293-78f555401415" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" exitCode=0 Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.009186 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerDied","Data":"7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.011153 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fee336d1-2c89-4ccb-b6ea-69a4697b7a29","Type":"ContainerDied","Data":"53f5f8eabb090c5ebef04244cba470e7c1cc6d5514edf4ec50de748007a04ec9"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.011280 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.026205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican796f-account-delete-h5ds7" event={"ID":"72765495-c470-41a5-b5a7-423025bdd6a7","Type":"ContainerStarted","Data":"ae01a9cb16792f9fade952477b0f5fe02bbc8f962121d2e0d047909e8a5bbe43"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.037288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" event={"ID":"5fd70346-51cf-44fc-8cea-48ee35deadb0","Type":"ContainerDied","Data":"35cf566470a091b9caef81f95df53c04f9fc375d2635c73c36f6ba1a60133602"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.037397 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.052480 4780 generic.go:334] "Generic (PLEG): container finished" podID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerID="bbf7ba30828f7305d2c91dd07104e5ee99cdcba79c89362c856ebc2c639710e1" exitCode=0 Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.052561 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerDied","Data":"bbf7ba30828f7305d2c91dd07104e5ee99cdcba79c89362c856ebc2c639710e1"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.052588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e9395104-b579-44d5-bbf0-69fe4d17406d","Type":"ContainerDied","Data":"00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.052602 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00dea51366f0025d7b786ce52a6c3792f2fea0584a0637161391e0da7fe4d28f" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.055505 4780 generic.go:334] "Generic (PLEG): container finished" podID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerID="6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" exitCode=0 Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.055560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c","Type":"ContainerDied","Data":"6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a"} Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.061370 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrona927-account-delete-5chq6" event={"ID":"574be54a-bbce-4f37-93b1-c9de6f1d0f4e","Type":"ContainerStarted","Data":"fa99f2a2d4d4d8e7399588071249075c46eaaa6972a5dfa29dd9a0119560f10e"} Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.096554 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.096616 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data podName:f5032d09-8298-4941-8b4b-0f24a57b8ced nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.096602998 +0000 UTC m=+1505.166119330 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data") pod "rabbitmq-server-0" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced") : configmap "rabbitmq-config-data" not found Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.096964 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.096996 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.096989348 +0000 UTC m=+1505.166505680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.129041 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.194783 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.197919 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.321403 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:37458->10.217.0.155:9311: read: connection reset by peer" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.321745 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:37442->10.217.0.155:9311: read: connection reset by peer" Dec 05 07:11:07 crc kubenswrapper[4780]: W1205 07:11:07.323790 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6e1d3b_503e_49c8_8d33_bcaae571525c.slice/crio-5f8287036f1e695574b04f10fd9553469b310f47439e97038098bede213305c9 WatchSource:0}: Error finding container 5f8287036f1e695574b04f10fd9553469b310f47439e97038098bede213305c9: Status 404 returned error can't find the container with id 5f8287036f1e695574b04f10fd9553469b310f47439e97038098bede213305c9 Dec 05 07:11:07 crc kubenswrapper[4780]: W1205 07:11:07.324605 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52234708_ef2b_40c7_af1b_61e1890dd674.slice/crio-8f280b141a81b2c397d24ff42404020ef0e106f0121446dbc46d2b0de4544e72 WatchSource:0}: Error finding container 8f280b141a81b2c397d24ff42404020ef0e106f0121446dbc46d2b0de4544e72: Status 404 returned error can't find the container with id 8f280b141a81b2c397d24ff42404020ef0e106f0121446dbc46d2b0de4544e72 Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.424432 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.427350 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.461007 4780 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:07 crc kubenswrapper[4780]: E1205 07:11:07.461101 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.586122 4780 scope.go:117] "RemoveContainer" containerID="7678f8c76dd9a388582a2e246f59d292a26940453c846b3140848ca635c8c94c" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.678507 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.764705 4780 scope.go:117] "RemoveContainer" containerID="b9808fa835d43d815f703095686bccb9a6eedb6aab78ee5755aeddb342d50d7a" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.788444 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.803191 4780 scope.go:117] "RemoveContainer" containerID="614162aaf287785a7cbbc0f2442a28903aec8fa26e737ecff5c47ce7458a1617" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815669 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815721 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js7f6\" (UniqueName: \"kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815793 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815939 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.815979 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data\") pod \"e9395104-b579-44d5-bbf0-69fe4d17406d\" (UID: \"e9395104-b579-44d5-bbf0-69fe4d17406d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.817751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.846032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.846414 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts" (OuterVolumeSpecName: "scripts") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.858707 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6" (OuterVolumeSpecName: "kube-api-access-js7f6") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "kube-api-access-js7f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922285 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922453 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922566 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqsv\" (UniqueName: \"kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922612 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922643 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922684 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.922701 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default\") pod \"621ea4dd-7bc5-4404-9369-1cd99335155d\" (UID: \"621ea4dd-7bc5-4404-9369-1cd99335155d\") " Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.923156 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.923172 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e9395104-b579-44d5-bbf0-69fe4d17406d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.923181 4780 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-js7f6\" (UniqueName: \"kubernetes.io/projected/e9395104-b579-44d5-bbf0-69fe4d17406d-kube-api-access-js7f6\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.923190 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.924653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.925399 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.926343 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.934030 4780 scope.go:117] "RemoveContainer" containerID="059a93cd192837f3113a8d0e3807d7b3f8ab87f622645af37531a78b78d5d7f8" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.935965 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.947559 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv" (OuterVolumeSpecName: "kube-api-access-mkqsv") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "kube-api-access-mkqsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:07 crc kubenswrapper[4780]: I1205 07:11:07.950503 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.004263 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.008179 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-54wt4"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.023142 4780 scope.go:117] "RemoveContainer" containerID="71729bfa39e43aff8b7d4b4b743bda8c1770fd26a85d5b0e6a9fae0194a3feb3" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.024614 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.024632 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqsv\" (UniqueName: \"kubernetes.io/projected/621ea4dd-7bc5-4404-9369-1cd99335155d-kube-api-access-mkqsv\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.024644 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.024656 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.024667 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/621ea4dd-7bc5-4404-9369-1cd99335155d-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.030575 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.042836 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.056084 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-54wt4"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.093958 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data\") pod \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126584 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle\") pod \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gt48\" (UniqueName: \"kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48\") pod \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\" (UID: \"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data\") pod \"33af7252-1228-4051-bab0-cfcaee04fe1d\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126704 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk8d9\" (UniqueName: \"kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9\") pod \"33af7252-1228-4051-bab0-cfcaee04fe1d\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.126816 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle\") pod \"33af7252-1228-4051-bab0-cfcaee04fe1d\" (UID: \"33af7252-1228-4051-bab0-cfcaee04fe1d\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.129040 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.134671 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.145575 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.170350 4780 scope.go:117] "RemoveContainer" containerID="65ffda40cf80de35ae936a6d650ef297fd76dee5044c69bbd6c7b2b5e327da96" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.170674 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.171410 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb4032b-ac6a-46ea-b301-500bf63d3518" path="/var/lib/kubelet/pods/2fb4032b-ac6a-46ea-b301-500bf63d3518/volumes" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.172705 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65736cb4-25b2-402e-8dfe-d00b218a274b" path="/var/lib/kubelet/pods/65736cb4-25b2-402e-8dfe-d00b218a274b/volumes" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.186411 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.197363 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9" (OuterVolumeSpecName: "kube-api-access-sk8d9") pod "33af7252-1228-4051-bab0-cfcaee04fe1d" (UID: "33af7252-1228-4051-bab0-cfcaee04fe1d"). InnerVolumeSpecName "kube-api-access-sk8d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.201780 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.202219 4780 generic.go:334] "Generic (PLEG): container finished" podID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerID="5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.204287 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.228381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48" (OuterVolumeSpecName: "kube-api-access-9gt48") pod "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" (UID: "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c"). InnerVolumeSpecName "kube-api-access-9gt48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233608 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj4v8\" (UniqueName: \"kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233749 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs\") pod \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233821 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233869 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233918 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s6tb\" (UniqueName: \"kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb\") pod \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.233950 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234072 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234113 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data\") pod \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234157 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234212 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc 
kubenswrapper[4780]: I1205 07:11:08.234241 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run\") pod \"2a294e09-ff41-4fcc-81f4-2a674c77c239\" (UID: \"2a294e09-ff41-4fcc-81f4-2a674c77c239\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234282 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle\") pod \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.234342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs\") pod \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\" (UID: \"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.241505 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gt48\" (UniqueName: \"kubernetes.io/projected/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-kube-api-access-9gt48\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.241543 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk8d9\" (UniqueName: \"kubernetes.io/projected/33af7252-1228-4051-bab0-cfcaee04fe1d-kube-api-access-sk8d9\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.245769 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs" (OuterVolumeSpecName: "logs") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.249563 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs" (OuterVolumeSpecName: "logs") pod "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" (UID: "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.254339 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.259469 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.259518 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.259532 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerDied","Data":"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.259557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0f8b72a-b08b-4c2f-98dc-242016b6f846","Type":"ContainerDied","Data":"2bb25b38c07985e1cebe5e146e01280477a7eef46395ee15936ece96d7e47724"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.259570 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.294671 4780 generic.go:334] "Generic (PLEG): container finished" podID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerID="64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.295065 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerDied","Data":"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.295203 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43c681b8-252b-4d1a-8293-27528bc83ed8","Type":"ContainerDied","Data":"af1117c2af1193882b9ea3f194b375432d358222d027e59524cbb53c4a965fd5"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.295413 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.323007 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts" (OuterVolumeSpecName: "scripts") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.325213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiebdd-account-delete-2px9p" event={"ID":"52234708-ef2b-40c7-af1b-61e1890dd674","Type":"ContainerStarted","Data":"8f280b141a81b2c397d24ff42404020ef0e106f0121446dbc46d2b0de4544e72"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.340149 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.343204 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345259 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345303 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345368 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345496 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345590 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ff5k\" (UniqueName: \"kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345620 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345648 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345734 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345783 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345807 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.345866 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.347864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs" (OuterVolumeSpecName: "logs") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350198 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350228 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350278 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350326 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350360 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8cr\" (UniqueName: \"kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350400 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whpfj\" (UniqueName: \"kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350437 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350461 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350463 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs" (OuterVolumeSpecName: "logs") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: 
"cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fj7\" (UniqueName: \"kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7\") pod \"43c681b8-252b-4d1a-8293-27528bc83ed8\" (UID: \"43c681b8-252b-4d1a-8293-27528bc83ed8\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350535 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350562 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs\") pod \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\" (UID: \"e0f8b72a-b08b-4c2f-98dc-242016b6f846\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350585 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350616 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350650 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id\") pod \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\" (UID: \"cf87b821-f0c0-41df-a1ee-f2c44a09cc82\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.350685 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data\") pod \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\" (UID: \"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7\") " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352479 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352514 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352525 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a294e09-ff41-4fcc-81f4-2a674c77c239-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352536 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-logs\") on node \"crc\" 
DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352547 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.352563 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.360190 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c","Type":"ContainerDied","Data":"8a66613de910b1fa6973c4549f30bc0960c3659829952e1502bbcddbd917cefb"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.360207 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs" (OuterVolumeSpecName: "logs") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.360335 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.360902 4780 scope.go:117] "RemoveContainer" containerID="5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.386930 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.389671 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-58fb69b8bc-qmkp5"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.396457 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb" (OuterVolumeSpecName: "kube-api-access-9s6tb") pod "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" (UID: "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11"). InnerVolumeSpecName "kube-api-access-9s6tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.396526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8" (OuterVolumeSpecName: "kube-api-access-vj4v8") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "kube-api-access-vj4v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.397827 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.399363 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerID="d247be1b147a98f7d05a4bb3c8635747189f02eca874ffceb138264c83747cc4" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.399431 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerDied","Data":"d247be1b147a98f7d05a4bb3c8635747189f02eca874ffceb138264c83747cc4"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.400010 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts" (OuterVolumeSpecName: "scripts") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.400277 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k" (OuterVolumeSpecName: "kube-api-access-4ff5k") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "kube-api-access-4ff5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.401930 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.403662 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs" (OuterVolumeSpecName: "logs") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.417106 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts" (OuterVolumeSpecName: "scripts") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.422035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell079a0-account-delete-dfjw8" event={"ID":"6b6e1d3b-503e-49c8-8d33-bcaae571525c","Type":"ContainerStarted","Data":"5f8287036f1e695574b04f10fd9553469b310f47439e97038098bede213305c9"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.431748 4780 generic.go:334] "Generic (PLEG): container finished" podID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerID="fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.431822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerDied","Data":"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.431851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf87b821-f0c0-41df-a1ee-f2c44a09cc82","Type":"ContainerDied","Data":"d97fe25a6142b5642b1114e6fd451ca14ff528856917d016baa9dc2ce96c7adc"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.431982 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.432077 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.433024 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr" (OuterVolumeSpecName: "kube-api-access-wz8cr") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "kube-api-access-wz8cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.445157 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts" (OuterVolumeSpecName: "scripts") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.446561 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.456464 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.456800 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-central-agent" containerID="cri-o://282e85d0f2ea9e2278d2658c562e6fa9b7d5cb1b13122f0ecb2e5ba8c5f54666" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.456961 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="proxy-httpd" containerID="cri-o://3793732da62f19e7a1e8b9f03d99576883cb03ca232244237185b399ee3c2f70" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.457004 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="sg-core" containerID="cri-o://7b6ccff69e702c06122f20efccc590a8f63a94c28204c42d45a3128606dcedcb" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.457039 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-notification-agent" containerID="cri-o://6729e14dde9f78be13ae40bdda9e3ae569261b8bcd8c18d065c49f17af80f082" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458505 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ff5k\" (UniqueName: \"kubernetes.io/projected/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-kube-api-access-4ff5k\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458526 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458536 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s6tb\" (UniqueName: \"kubernetes.io/projected/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-kube-api-access-9s6tb\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458557 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458568 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458578 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz8cr\" (UniqueName: \"kubernetes.io/projected/e0f8b72a-b08b-4c2f-98dc-242016b6f846-kube-api-access-wz8cr\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458591 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458601 4780 reconciler_common.go:293] 
"Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f8b72a-b08b-4c2f-98dc-242016b6f846-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458610 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458620 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458629 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458637 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c681b8-252b-4d1a-8293-27528bc83ed8-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458647 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.458656 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj4v8\" (UniqueName: \"kubernetes.io/projected/2a294e09-ff41-4fcc-81f4-2a674c77c239-kube-api-access-vj4v8\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.460079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj" (OuterVolumeSpecName: "kube-api-access-whpfj") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "kube-api-access-whpfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.464394 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7" (OuterVolumeSpecName: "kube-api-access-g5fj7") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "kube-api-access-g5fj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.468918 4780 generic.go:334] "Generic (PLEG): container finished" podID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerID="0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.468998 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerDied","Data":"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.469032 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669bccb86b-8cjsq" event={"ID":"d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7","Type":"ContainerDied","Data":"a8681be4c4b6474dc591d34b256be26ca28c2b2b591faad0d1a0ae0a713d826c"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.469137 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669bccb86b-8cjsq" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.474019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1be5-account-delete-6mpnc" event={"ID":"9c542de0-85ab-43f2-89ca-fb8a6c19e49d","Type":"ContainerStarted","Data":"fe21a811d663618b9fd149884fbb733768d2c4ab226581c500612dc7d36f0541"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.496460 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.496893 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" containerName="kube-state-metrics" containerID="cri-o://a959f6bc66c2db1c1600ed04dc5d26591b5e87880b38d5a29268b847e514d376" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.516694 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"33af7252-1228-4051-bab0-cfcaee04fe1d","Type":"ContainerDied","Data":"a01fbb63151c66d8c336572ebcdd5435b5db45c085006249ad45fb44dc0f5052"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.516796 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.564049 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whpfj\" (UniqueName: \"kubernetes.io/projected/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-kube-api-access-whpfj\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.564075 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fj7\" (UniqueName: \"kubernetes.io/projected/43c681b8-252b-4d1a-8293-27528bc83ed8-kube-api-access-g5fj7\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.601652 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"621ea4dd-7bc5-4404-9369-1cd99335155d","Type":"ContainerDied","Data":"6c4fdbdea601ec90119f264aeaaba1beb2f1841bc6041f4f23023ec7a91c260f"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.602589 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.651349 4780 generic.go:334] "Generic (PLEG): container finished" podID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerID="0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.651445 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerDied","Data":"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.651472 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2a294e09-ff41-4fcc-81f4-2a674c77c239","Type":"ContainerDied","Data":"894447edf40fe1f8908910d228e6331eab2f8f11df72629a8b6996da472579d9"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.651563 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.664403 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.687335 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.689800 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.690873 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.690955 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.694007 4780 generic.go:334] "Generic (PLEG): container finished" podID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerID="7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11" exitCode=0 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.694118 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.694160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerDied","Data":"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.694205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c381b4ec-8b36-4a3d-8e07-dbbc3a021f11","Type":"ContainerDied","Data":"33f982db2a13e4efcd01de3cd54fc7ade7eeba53042da8a022ed630d235143fe"} Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.695578 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.736427 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.736648 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" containerName="memcached" containerID="cri-o://ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.757853 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.780106 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jjpc8"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.798107 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j7ntf"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.811946 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jjpc8"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.817041 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j7ntf"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.831966 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.832284 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5c9f9456b6-zflhk" podUID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" containerName="keystone-api" containerID="cri-o://e2272792c63e1f2159b6320d0d6009da818ee74ca29907f48e07464acca5482a" gracePeriod=30 Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.857940 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.878084 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ccmqk"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.885000 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ccmqk"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.896938 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7eb-account-create-update-wjj2g"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.900439 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.905676 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: I1205 07:11:08.907122 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e7eb-account-create-update-wjj2g"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.909372 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.925381 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.925474 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.939780 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.955357 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:08 crc kubenswrapper[4780]: E1205 07:11:08.955444 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.280788 4780 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.287102 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.289163 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.289834 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data podName:1e6efd4f-660c-44e1-bf69-8b1cec6a6e85 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:17.289796824 +0000 UTC m=+1511.359313236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data") pod "rabbitmq-cell1-server-0" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85") : configmap "rabbitmq-cell1-config-data" not found Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.381651 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.391253 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.665369 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data" (OuterVolumeSpecName: "config-data") pod "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" (UID: "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.676317 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.702239 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.702278 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.711849 4780 generic.go:334] "Generic (PLEG): container finished" podID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerID="54efef79e6df78f9c7a79be7c0902ee44a3970e79099cd25bf9047386200ff4c" exitCode=0 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.712544 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerDied","Data":"54efef79e6df78f9c7a79be7c0902ee44a3970e79099cd25bf9047386200ff4c"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.712571 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59d58fb65c-nzf5k" event={"ID":"8d9c218c-8cf4-468d-a946-bb14fc0024b0","Type":"ContainerDied","Data":"d53dc0d388327b410b5f7b526c5655ce1ba3965c6cef72550c990c438f6723b0"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.712584 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53dc0d388327b410b5f7b526c5655ce1ba3965c6cef72550c990c438f6723b0" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.716213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa463-account-delete-wsgnm" event={"ID":"c269c975-543e-44e0-ac7a-abf3f7a619dd","Type":"ContainerStarted","Data":"5e2307920b476147e02bff65c8ad2fb39f274b042f6d2f22124c7288a75d6831"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.721995 4780 generic.go:334] "Generic (PLEG): container finished" podID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerID="3793732da62f19e7a1e8b9f03d99576883cb03ca232244237185b399ee3c2f70" exitCode=0 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.722025 4780 generic.go:334] "Generic (PLEG): container finished" podID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerID="7b6ccff69e702c06122f20efccc590a8f63a94c28204c42d45a3128606dcedcb" exitCode=2 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.722033 4780 generic.go:334] "Generic (PLEG): container finished" podID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerID="282e85d0f2ea9e2278d2658c562e6fa9b7d5cb1b13122f0ecb2e5ba8c5f54666" exitCode=0 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.722061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerDied","Data":"3793732da62f19e7a1e8b9f03d99576883cb03ca232244237185b399ee3c2f70"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.722111 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerDied","Data":"7b6ccff69e702c06122f20efccc590a8f63a94c28204c42d45a3128606dcedcb"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.722124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerDied","Data":"282e85d0f2ea9e2278d2658c562e6fa9b7d5cb1b13122f0ecb2e5ba8c5f54666"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.724812 4780 generic.go:334] "Generic (PLEG): container finished" podID="29f97591-4528-4ed0-918c-b6de191c452a" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" exitCode=0 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.724913 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29f97591-4528-4ed0-918c-b6de191c452a","Type":"ContainerDied","Data":"d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.727237 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell079a0-account-delete-dfjw8" event={"ID":"6b6e1d3b-503e-49c8-8d33-bcaae571525c","Type":"ContainerStarted","Data":"d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.727988 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell079a0-account-delete-dfjw8" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.749665 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799c48f5f4-sm7kz" event={"ID":"a6b8df94-a979-4c1a-bffd-5f5052f0ad12","Type":"ContainerDied","Data":"691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.749712 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691e3590f26e2c0cb63fa1d43d7770b0258ca9a4136427ced2921a76008669bf" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.752612 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance4824-account-delete-4st4x" event={"ID":"202ef989-0cbf-4120-8621-11201cfe3d64","Type":"ContainerStarted","Data":"6733da8d639a7996b7e7eb99726131bb8bf0c04d0e15d210e53fa012587cc24c"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.757772 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/glance4824-account-delete-4st4x" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.771078 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" probeResult="failure" output="command timed out" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.777393 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell079a0-account-delete-dfjw8" podStartSLOduration=7.777365033 podStartE2EDuration="7.777365033s" podCreationTimestamp="2025-12-05 07:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:11:09.750526906 +0000 UTC m=+1503.820043268" watchObservedRunningTime="2025-12-05 07:11:09.777365033 +0000 UTC m=+1503.846881365" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.796429 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1be5-account-delete-6mpnc" event={"ID":"9c542de0-85ab-43f2-89ca-fb8a6c19e49d","Type":"ContainerStarted","Data":"39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.797920 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance4824-account-delete-4st4x" podStartSLOduration=8.797895898 podStartE2EDuration="8.797895898s" podCreationTimestamp="2025-12-05 07:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:11:09.793761136 +0000 UTC m=+1503.863277488" watchObservedRunningTime="2025-12-05 07:11:09.797895898 +0000 UTC m=+1503.867412500" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.798181 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/cinder1be5-account-delete-6mpnc" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.805826 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerID="21cb52d533dbe56f4988844a69a64aaf8e041956d1ff9074d70672e4e95db8ee" exitCode=0 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.805930 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerDied","Data":"21cb52d533dbe56f4988844a69a64aaf8e041956d1ff9074d70672e4e95db8ee"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.805959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" event={"ID":"aa86c0d1-d6cb-4566-b4b3-352c690b0a96","Type":"ContainerDied","Data":"59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.805975 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ea98a0657b0d87366cb8050ab1169c2319aa811587ad690f9b0fc67368058c" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.813084 4780 generic.go:334] "Generic (PLEG): container finished" podID="72765495-c470-41a5-b5a7-423025bdd6a7" containerID="409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21" exitCode=1 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.813213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican796f-account-delete-h5ds7" event={"ID":"72765495-c470-41a5-b5a7-423025bdd6a7","Type":"ContainerDied","Data":"409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21"} Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.814425 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.814531 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts podName:6b6e1d3b-503e-49c8-8d33-bcaae571525c nodeName:}" failed. No retries permitted until 2025-12-05 07:11:10.314508788 +0000 UTC m=+1504.384025120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts") pod "novacell079a0-account-delete-dfjw8" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c") : configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.815689 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/barbican796f-account-delete-h5ds7" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.815763 4780 scope.go:117] "RemoveContainer" containerID="409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.824941 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder1be5-account-delete-6mpnc" podStartSLOduration=7.8249121200000005 podStartE2EDuration="7.82491212s" podCreationTimestamp="2025-12-05 07:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:11:09.812848773 +0000 UTC m=+1503.882365105" watchObservedRunningTime="2025-12-05 07:11:09.82491212 +0000 UTC m=+1503.894428472" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.825834 4780 generic.go:334] "Generic (PLEG): container finished" podID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" containerID="a959f6bc66c2db1c1600ed04dc5d26591b5e87880b38d5a29268b847e514d376" exitCode=2 Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.826246 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfe98bcd-7b01-4246-9879-15ed51cf7a1f","Type":"ContainerDied","Data":"a959f6bc66c2db1c1600ed04dc5d26591b5e87880b38d5a29268b847e514d376"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.826326 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfe98bcd-7b01-4246-9879-15ed51cf7a1f","Type":"ContainerDied","Data":"9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.826351 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cee6cdde86ca1b3353b51ca195794d427e521b11f33d6c715100757aed4f996" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.850009 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerStarted","Data":"4a9722cf339ad50786faf6f0b377a107fdf0f403c9198b1e08b15f16531b57a8"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.860154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrona927-account-delete-5chq6" event={"ID":"574be54a-bbce-4f37-93b1-c9de6f1d0f4e","Type":"ContainerStarted","Data":"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465"} Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.860664 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/neutrona927-account-delete-5chq6" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:09 crc kubenswrapper[4780]: I1205 07:11:09.860701 4780 scope.go:117] "RemoveContainer" containerID="530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465" Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.922109 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.922338 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts podName:202ef989-0cbf-4120-8621-11201cfe3d64 nodeName:}" failed. 
No retries permitted until 2025-12-05 07:11:10.422199733 +0000 UTC m=+1504.491716225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts") pod "glance4824-account-delete-4st4x" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64") : configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.923264 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.923627 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts podName:9c542de0-85ab-43f2-89ca-fb8a6c19e49d nodeName:}" failed. No retries permitted until 2025-12-05 07:11:10.423609411 +0000 UTC m=+1504.493125743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts") pod "cinder1be5-account-delete-6mpnc" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d") : configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.927426 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:09 crc kubenswrapper[4780]: E1205 07:11:09.927766 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts podName:72765495-c470-41a5-b5a7-423025bdd6a7 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:10.427743263 +0000 UTC m=+1504.497259595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts") pod "barbican796f-account-delete-h5ds7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.025187 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.025267 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts podName:574be54a-bbce-4f37-93b1-c9de6f1d0f4e nodeName:}" failed. No retries permitted until 2025-12-05 07:11:10.525246303 +0000 UTC m=+1504.594762635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts") pod "neutrona927-account-delete-5chq6" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.048116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" (UID: "fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.053054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.127985 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.128344 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.161236 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" (UID: "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.173762 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" probeResult="failure" output=< Dec 05 07:11:10 crc kubenswrapper[4780]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Dec 05 07:11:10 crc kubenswrapper[4780]: > Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.180599 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data" (OuterVolumeSpecName: "config-data") pod "e9395104-b579-44d5-bbf0-69fe4d17406d" (UID: "e9395104-b579-44d5-bbf0-69fe4d17406d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.184490 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c4e4b4-a803-47f4-99eb-fb15b65b82b2" path="/var/lib/kubelet/pods/42c4e4b4-a803-47f4-99eb-fb15b65b82b2/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.185159 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503f38d6-82f5-473e-9c59-2c32d8b8f855" path="/var/lib/kubelet/pods/503f38d6-82f5-473e-9c59-2c32d8b8f855/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.185698 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" path="/var/lib/kubelet/pods/5fd70346-51cf-44fc-8cea-48ee35deadb0/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.186853 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828f916b-54ac-4498-b1a7-139334944d9b" path="/var/lib/kubelet/pods/828f916b-54ac-4498-b1a7-139334944d9b/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.187515 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11a6aab-3b40-43bd-bdd6-3fc630277d49" path="/var/lib/kubelet/pods/d11a6aab-3b40-43bd-bdd6-3fc630277d49/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.188213 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53574ab-8107-4d3d-a695-d64db3bbb908" path="/var/lib/kubelet/pods/e53574ab-8107-4d3d-a695-d64db3bbb908/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.189497 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" path="/var/lib/kubelet/pods/fee336d1-2c89-4ccb-b6ea-69a4697b7a29/volumes" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.205221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.210793 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33af7252-1228-4051-bab0-cfcaee04fe1d" (UID: "33af7252-1228-4051-bab0-cfcaee04fe1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.229429 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data" (OuterVolumeSpecName: "config-data") pod "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" (UID: "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.230980 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.230999 4780 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.231008 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.231018 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9395104-b579-44d5-bbf0-69fe4d17406d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.231026 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.284279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.285433 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.301499 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.307383 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-58fb69b8bc-qmkp5" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": dial tcp 10.217.0.167:8080: i/o timeout" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.309009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.316855 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621ea4dd-7bc5-4404-9369-1cd99335155d" (UID: "621ea4dd-7bc5-4404-9369-1cd99335155d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.318432 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.332428 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.332460 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.332470 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621ea4dd-7bc5-4404-9369-1cd99335155d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.332478 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.333014 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.333070 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts podName:6b6e1d3b-503e-49c8-8d33-bcaae571525c nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.333052515 +0000 UTC m=+1505.402568847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts") pod "novacell079a0-account-delete-dfjw8" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.346109 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data" (OuterVolumeSpecName: "config-data") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.357197 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.397296 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="galera" containerID="cri-o://ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" gracePeriod=29 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.400185 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.403576 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data" (OuterVolumeSpecName: "config-data") pod "33af7252-1228-4051-bab0-cfcaee04fe1d" (UID: "33af7252-1228-4051-bab0-cfcaee04fe1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.433085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data" (OuterVolumeSpecName: "config-data") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434330 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434394 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434430 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts podName:202ef989-0cbf-4120-8621-11201cfe3d64 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.434410668 +0000 UTC m=+1505.503927000 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts") pod "glance4824-account-delete-4st4x" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434467 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts podName:72765495-c470-41a5-b5a7-423025bdd6a7 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.434459339 +0000 UTC m=+1505.503975671 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts") pod "barbican796f-account-delete-h5ds7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.434340 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.434500 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.434514 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.434544 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434531 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.434555 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33af7252-1228-4051-bab0-cfcaee04fe1d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.434637 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts podName:9c542de0-85ab-43f2-89ca-fb8a6c19e49d nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.434610633 +0000 UTC m=+1505.504127025 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts") pod "cinder1be5-account-delete-6mpnc" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d") : configmap "openstack-scripts" not found Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.445675 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.447118 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.451430 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data" (OuterVolumeSpecName: "config-data") pod "43c681b8-252b-4d1a-8293-27528bc83ed8" (UID: "43c681b8-252b-4d1a-8293-27528bc83ed8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.476978 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data" (OuterVolumeSpecName: "config-data") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.478888 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0f8b72a-b08b-4c2f-98dc-242016b6f846" (UID: "e0f8b72a-b08b-4c2f-98dc-242016b6f846"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.503717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data" (OuterVolumeSpecName: "config-data") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.535982 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.536018 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.536033 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8b72a-b08b-4c2f-98dc-242016b6f846-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.536047 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.536060 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.536073 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c681b8-252b-4d1a-8293-27528bc83ed8-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.536093 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not 
Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.536093 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.536219 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts podName:574be54a-bbce-4f37-93b1-c9de6f1d0f4e nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.536196363 +0000 UTC m=+1505.605712695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts") pod "neutrona927-account-delete-5chq6" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e") : configmap "openstack-scripts" not found
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.540009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.540330 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused"
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.560485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" (UID: "c381b4ec-8b36-4a3d-8e07-dbbc3a021f11"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.576201 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf87b821-f0c0-41df-a1ee-f2c44a09cc82" (UID: "cf87b821-f0c0-41df-a1ee-f2c44a09cc82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.579189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a294e09-ff41-4fcc-81f4-2a674c77c239" (UID: "2a294e09-ff41-4fcc-81f4-2a674c77c239"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.602407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.639108 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.639139 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a294e09-ff41-4fcc-81f4-2a674c77c239-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.639151 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf87b821-f0c0-41df-a1ee-f2c44a09cc82-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.639160 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.639171 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.665775 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" (UID: "d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:10 crc kubenswrapper[4780]: E1205 07:11:10.718907 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e6efd4f_660c_44e1_bf69_8b1cec6a6e85.slice/crio-530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c542de0_85ab_43f2_89ca_fb8a6c19e49d.slice/crio-39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6e1d3b_503e_49c8_8d33_bcaae571525c.slice/crio-conmon-d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c32a219_7b72_4302_8cc4_b9f11a672e8d.slice/crio-ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b6e1d3b_503e_49c8_8d33_bcaae571525c.slice/crio-d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e6efd4f_660c_44e1_bf69_8b1cec6a6e85.slice/crio-conmon-530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c542de0_85ab_43f2_89ca_fb8a6c19e49d.slice/crio-conmon-39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.740451 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.882319 4780 generic.go:334] "Generic (PLEG): container finished" podID="52234708-ef2b-40c7-af1b-61e1890dd674" containerID="f26a658dbc0f16fa4268a778cc2e72e57c54ffefa2863c5a3f9e4202f590a60b" exitCode=1
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.885915 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novaapiebdd-account-delete-2px9p" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found"
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.886174 4780 scope.go:117] "RemoveContainer" containerID="f26a658dbc0f16fa4268a778cc2e72e57c54ffefa2863c5a3f9e4202f590a60b"
Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.900598 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerID="d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994" exitCode=1
pod="openstack/novacell079a0-account-delete-dfjw8" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.901227 4780 scope.go:117] "RemoveContainer" containerID="d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.913350 4780 generic.go:334] "Generic (PLEG): container finished" podID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerID="5450b625e2fd6628a65ff330106c052fa609d51529eba7c91d50eb2a2c2bfed0" exitCode=0 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.919753 4780 generic.go:334] "Generic (PLEG): container finished" podID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" containerID="ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77" exitCode=0 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.924576 4780 generic.go:334] "Generic (PLEG): container finished" podID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerID="39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518" exitCode=1 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.925447 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinder1be5-account-delete-6mpnc" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.925499 4780 scope.go:117] "RemoveContainer" containerID="39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.941079 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerID="530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d" exitCode=0 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.945467 4780 generic.go:334] "Generic (PLEG): container finished" podID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerID="530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465" exitCode=1 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.951674 4780 generic.go:334] "Generic (PLEG): container finished" podID="202ef989-0cbf-4120-8621-11201cfe3d64" containerID="6733da8d639a7996b7e7eb99726131bb8bf0c04d0e15d210e53fa012587cc24c" exitCode=1 Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.953084 4780 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance4824-account-delete-4st4x" secret="" err="secret \"galera-openstack-dockercfg-zv8sw\" not found" Dec 05 07:11:10 crc kubenswrapper[4780]: I1205 07:11:10.953135 4780 scope.go:117] "RemoveContainer" containerID="6733da8d639a7996b7e7eb99726131bb8bf0c04d0e15d210e53fa012587cc24c" Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.050714 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.050772 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts podName:52234708-ef2b-40c7-af1b-61e1890dd674 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:11.550756721 +0000 UTC m=+1505.620273053 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts") pod "novaapiebdd-account-delete-2px9p" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674") : configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.151268 4780 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.151339 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data podName:f5032d09-8298-4941-8b4b-0f24a57b8ced nodeName:}" failed. No retries permitted until 2025-12-05 07:11:19.151323074 +0000 UTC m=+1513.220839406 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data") pod "rabbitmq-server-0" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced") : configmap "rabbitmq-config-data" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.151732 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.151788 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:19.151775985 +0000 UTC m=+1513.221292317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.356182 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.356254 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts podName:6b6e1d3b-503e-49c8-8d33-bcaae571525c nodeName:}" failed. No retries permitted until 2025-12-05 07:11:13.35623817 +0000 UTC m=+1507.425754502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts") pod "novacell079a0-account-delete-dfjw8" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c") : configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.381746 4780 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.241s" Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.382696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiebdd-account-delete-2px9p" event={"ID":"52234708-ef2b-40c7-af1b-61e1890dd674","Type":"ContainerDied","Data":"f26a658dbc0f16fa4268a778cc2e72e57c54ffefa2863c5a3f9e4202f590a60b"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell079a0-account-delete-dfjw8" event={"ID":"6b6e1d3b-503e-49c8-8d33-bcaae571525c","Type":"ContainerDied","Data":"d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerDied","Data":"5450b625e2fd6628a65ff330106c052fa609d51529eba7c91d50eb2a2c2bfed0"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384742 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7c32a219-7b72-4302-8cc4-b9f11a672e8d","Type":"ContainerDied","Data":"ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384761 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7c32a219-7b72-4302-8cc4-b9f11a672e8d","Type":"ContainerDied","Data":"c77d4e84f7126a327e580a287c27add1511a08fcf8cb3e6afb03dac9e751e6ff"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384773 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77d4e84f7126a327e580a287c27add1511a08fcf8cb3e6afb03dac9e751e6ff" Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384793 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"29f97591-4528-4ed0-918c-b6de191c452a","Type":"ContainerDied","Data":"6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384807 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6976e344600e3cc59e482468d2b47f25d2347e7649562a0cb8ec9cdcc8e64e28" Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384817 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1be5-account-delete-6mpnc" event={"ID":"9c542de0-85ab-43f2-89ca-fb8a6c19e49d","Type":"ContainerDied","Data":"39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384830 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85","Type":"ContainerDied","Data":"530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85","Type":"ContainerDied","Data":"29e579b44d96c57574bd38699ee2194136d3ab35d0b7c03c2608a355ee26cf23"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384853 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e579b44d96c57574bd38699ee2194136d3ab35d0b7c03c2608a355ee26cf23" Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384863 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrona927-account-delete-5chq6" event={"ID":"574be54a-bbce-4f37-93b1-c9de6f1d0f4e","Type":"ContainerDied","Data":"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.384896 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance4824-account-delete-4st4x" event={"ID":"202ef989-0cbf-4120-8621-11201cfe3d64","Type":"ContainerDied","Data":"6733da8d639a7996b7e7eb99726131bb8bf0c04d0e15d210e53fa012587cc24c"} Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.388901 4780 scope.go:117] "RemoveContainer" containerID="91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124" Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.443386 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-799c48f5f4-sm7kz" Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.461477 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.461828 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts podName:202ef989-0cbf-4120-8621-11201cfe3d64 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:13.461811108 +0000 UTC m=+1507.531327440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts") pod "glance4824-account-delete-4st4x" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64") : configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.462041 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.462102 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts podName:9c542de0-85ab-43f2-89ca-fb8a6c19e49d nodeName:}" failed. No retries permitted until 2025-12-05 07:11:13.462078325 +0000 UTC m=+1507.531594657 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts") pod "cinder1be5-account-delete-6mpnc" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d") : configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.462125 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.462145 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts podName:72765495-c470-41a5-b5a7-423025bdd6a7 nodeName:}" failed. 
No retries permitted until 2025-12-05 07:11:13.462139767 +0000 UTC m=+1507.531656099 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts") pod "barbican796f-account-delete-h5ds7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7") : configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.524231 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-s4cxl"] Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.566569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89pf4\" (UniqueName: \"kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.566624 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.566735 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.566949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.567001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.567055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.567091 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data\") pod \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\" (UID: \"a6b8df94-a979-4c1a-bffd-5f5052f0ad12\") " Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.568296 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.568370 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts podName:574be54a-bbce-4f37-93b1-c9de6f1d0f4e nodeName:}" failed. 
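[Annotation] The a6b8df94-... volumes above now walk the same teardown path every other deleted pod in this section follows: reconciler_common.go logs "operationExecutor.UnmountVolume started", operation_generator.go logs "UnmountVolume.TearDown succeeded", and reconciler_common.go finally reports "Volume detached ... DevicePath \"\"". A toy Go model of that three-step progression; the state names are mine, not kubelet's:

```go
// Model the unmount -> teardown -> detached progression logged above.
package main

import "fmt"

type volumeState int

const (
	mounted volumeState = iota
	unmountStarted // "operationExecutor.UnmountVolume started"
	tornDown       // "UnmountVolume.TearDown succeeded"
	detached       // "Volume detached ... DevicePath \"\""
)

func (s volumeState) String() string {
	return [...]string{"mounted", "unmount started", "teardown succeeded", "detached"}[s]
}

func main() {
	v := mounted
	for _, next := range []volumeState{unmountStarted, tornDown, detached} {
		fmt.Printf("%s -> %s\n", v, next)
		v = next
	}
}
```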
Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.568370 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts podName:574be54a-bbce-4f37-93b1-c9de6f1d0f4e nodeName:}" failed. No retries permitted until 2025-12-05 07:11:13.568348482 +0000 UTC m=+1507.637864814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts") pod "neutrona927-account-delete-5chq6" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e") : configmap "openstack-scripts" not found
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.571353 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-s4cxl"]
Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.571527 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.571601 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts podName:52234708-ef2b-40c7-af1b-61e1890dd674 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:12.571577289 +0000 UTC m=+1506.641093621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts") pod "novaapiebdd-account-delete-2px9p" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674") : configmap "openstack-scripts" not found
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.572227 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs" (OuterVolumeSpecName: "logs") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.579170 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4" (OuterVolumeSpecName: "kube-api-access-89pf4") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "kube-api-access-89pf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.579963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.612128 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.622520 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance4824-account-delete-4st4x"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.628546 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4824-account-create-update-m7ss5"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.635513 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4824-account-create-update-m7ss5"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.677695 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.677726 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.677735 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89pf4\" (UniqueName: \"kubernetes.io/projected/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-kube-api-access-89pf4\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.677747 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-logs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.679110 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.695488 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vv8kw"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.704925 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vv8kw"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.705204 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.712219 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutrona927-account-delete-5chq6"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.714847 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data" (OuterVolumeSpecName: "config-data") pod "a6b8df94-a979-4c1a-bffd-5f5052f0ad12" (UID: "a6b8df94-a979-4c1a-bffd-5f5052f0ad12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.721814 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a927-account-create-update-dmw95"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.734795 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a927-account-create-update-dmw95"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.780075 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.780106 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.780116 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6b8df94-a979-4c1a-bffd-5f5052f0ad12-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.933205 4780 scope.go:117] "RemoveContainer" containerID="5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2"
Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.935297 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2\": container with ID starting with 5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2 not found: ID does not exist" containerID="5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.935327 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2"} err="failed to get container status \"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2\": rpc error: code = NotFound desc = could not find container \"5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2\": container with ID starting with 5411f6c51fab8022117d8cf1a502d504176bc62865e4ce6364df807d84fa80a2 not found: ID does not exist"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.935351 4780 scope.go:117] "RemoveContainer" containerID="91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124"
Dec 05 07:11:11 crc kubenswrapper[4780]: E1205 07:11:11.935666 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124\": container with ID starting with 91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124 not found: ID does not exist" containerID="91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.935682 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124"} err="failed to get container status \"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124\": rpc error: code = NotFound desc = could not find container \"91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124\": container with ID starting with 91f49e033bb4c719b75d6876a9a3bdbd9c7c76bdff22cbe076121349149a9124 not found: ID does not exist"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.935694 4780 scope.go:117] "RemoveContainer" containerID="64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.966188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-qqtqb"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.982583 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffce971d-fa60-450d-a347-29ba2a9c9c84/ovn-northd/0.log"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.982665 4780 generic.go:334] "Generic (PLEG): container finished" podID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" exitCode=139
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.982767 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerDied","Data":"70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294"}
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.982803 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffce971d-fa60-450d-a347-29ba2a9c9c84","Type":"ContainerDied","Data":"85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c"}
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.982816 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c3e6874d1d012841c7662faeef8af21be92898cb0b9d2fff3bbfd0059b2c6c"
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.987868 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-qqtqb"]
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.990511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f5032d09-8298-4941-8b4b-0f24a57b8ced","Type":"ContainerDied","Data":"c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33"}
Dec 05 07:11:11 crc kubenswrapper[4780]: I1205 07:11:11.990555 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a5ea690980139eefba2d05b84901a873815d1ba48bd42651718da8ce477c33"
Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.007945 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-799c48f5f4-sm7kz"
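[Annotation] The paired "RemoveContainer" / "not found: ID does not exist" entries here (5411f6c5..., 91f49e03..., and 64e9a796... just below) show cleanup racing a runtime that has already removed the container: the kubelet logs the NotFound and moves on. A sketch of that idempotent-delete pattern under assumed error types; this is not the CRI client's actual API:

```go
// Treat NotFound as "already deleted" during cleanup, as the
// DeleteContainer entries above effectively do.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer stands in for a runtime delete call (hypothetical).
func removeContainer(id string, exists bool) error {
	if !exists {
		return fmt.Errorf("rpc error: %w", errNotFound)
	}
	return nil
}

func cleanup(id string, exists bool) {
	err := removeContainer(id, exists)
	switch {
	case errors.Is(err, errNotFound):
		fmt.Printf("container %s already gone, nothing to do\n", id)
	case err != nil:
		fmt.Println("real failure:", err)
	default:
		fmt.Printf("removed %s\n", id)
	}
}

func main() {
	cleanup("5411f6c51fab", false) // mirrors the NotFound entries in the log
}
```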
Need to start a new one" pod="openstack/barbican-worker-59d58fb65c-nzf5k" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.055035 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.068500 4780 scope.go:117] "RemoveContainer" containerID="4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.079191 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-796f-account-create-update-dc2hl"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.083855 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qslrx\" (UniqueName: \"kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx\") pod \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.084006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle\") pod \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.084156 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs\") pod \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.084264 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom\") pod \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.084279 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data\") pod \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\" (UID: \"8d9c218c-8cf4-468d-a946-bb14fc0024b0\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.085589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs" (OuterVolumeSpecName: "logs") pod "8d9c218c-8cf4-468d-a946-bb14fc0024b0" (UID: "8d9c218c-8cf4-468d-a946-bb14fc0024b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.098382 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-796f-account-create-update-dc2hl"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.113171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx" (OuterVolumeSpecName: "kube-api-access-qslrx") pod "8d9c218c-8cf4-468d-a946-bb14fc0024b0" (UID: "8d9c218c-8cf4-468d-a946-bb14fc0024b0"). InnerVolumeSpecName "kube-api-access-qslrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.129224 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lmg7j"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.135342 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lmg7j"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.138046 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d9c218c-8cf4-468d-a946-bb14fc0024b0" (UID: "8d9c218c-8cf4-468d-a946-bb14fc0024b0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.140290 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d9c218c-8cf4-468d-a946-bb14fc0024b0" (UID: "8d9c218c-8cf4-468d-a946-bb14fc0024b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.151337 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29cd460d-e210-4a2d-9199-ecf32fbd3fb6" path="/var/lib/kubelet/pods/29cd460d-e210-4a2d-9199-ecf32fbd3fb6/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.152089 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54659268-947f-4e6d-8b41-3fc32e830ed3" path="/var/lib/kubelet/pods/54659268-947f-4e6d-8b41-3fc32e830ed3/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.152837 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9221549a-ed1e-4bfc-8bf7-6ecaec0c2069" path="/var/lib/kubelet/pods/9221549a-ed1e-4bfc-8bf7-6ecaec0c2069/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.153521 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a5cb7e-5c29-4b34-9260-436b933dc431" path="/var/lib/kubelet/pods/a7a5cb7e-5c29-4b34-9260-436b933dc431/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.154868 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a" path="/var/lib/kubelet/pods/c6ae3ef0-0a62-4f04-aa97-1f167e0c5f3a/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.155533 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a6b6eb-0713-42a6-ad08-582ea6d835cb" path="/var/lib/kubelet/pods/d6a6b6eb-0713-42a6-ad08-582ea6d835cb/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.156302 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28dc679-aa81-426b-b4cb-cc6c25c37791" path="/var/lib/kubelet/pods/e28dc679-aa81-426b-b4cb-cc6c25c37791/volumes" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.180585 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data" (OuterVolumeSpecName: "config-data") pod "8d9c218c-8cf4-468d-a946-bb14fc0024b0" (UID: "8d9c218c-8cf4-468d-a946-bb14fc0024b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.186440 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qslrx\" (UniqueName: \"kubernetes.io/projected/8d9c218c-8cf4-468d-a946-bb14fc0024b0-kube-api-access-qslrx\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.186467 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.186476 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d9c218c-8cf4-468d-a946-bb14fc0024b0-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.186486 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.186497 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9c218c-8cf4-468d-a946-bb14fc0024b0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.217289 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.228939 4780 scope.go:117] "RemoveContainer" containerID="64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb" Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.229257 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb\": container with ID starting with 64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb not found: ID does not exist" containerID="64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.229292 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb"} err="failed to get container status \"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb\": rpc error: code = NotFound desc = could not find container \"64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb\": container with ID starting with 64e9a796a6a1c9b35712331d801ba9536334f08c0222d4f53831eb461fd41dfb not found: ID does not exist" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.229312 4780 scope.go:117] "RemoveContainer" containerID="4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14" Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.229468 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14\": container with ID starting with 4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14 not found: ID does not exist" containerID="4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.229488 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14"} err="failed to get container status \"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14\": rpc error: code = NotFound desc = could not find container \"4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14\": container with ID starting with 4f49f3986e9c2af51b1bcc7f8ffd684aab21bbc800c72e78a4125b257b714d14 not found: ID does not exist" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.229502 4780 scope.go:117] "RemoveContainer" containerID="6d62e69774c5587c8e04a48087fe8984cb21a4165f3b57401aaae1dcddc7f33a" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.254397 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a463-account-create-update-82ctr"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.254435 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.254456 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a463-account-create-update-82ctr"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.300413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle\") pod \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.300486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8pj5\" (UniqueName: \"kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5\") pod \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.300637 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data\") pod \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.300693 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs\") pod \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.300730 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom\") pod \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\" (UID: \"aa86c0d1-d6cb-4566-b4b3-352c690b0a96\") " Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.319456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs" (OuterVolumeSpecName: "logs") pod "aa86c0d1-d6cb-4566-b4b3-352c690b0a96" (UID: "aa86c0d1-d6cb-4566-b4b3-352c690b0a96"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.319089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aa86c0d1-d6cb-4566-b4b3-352c690b0a96" (UID: "aa86c0d1-d6cb-4566-b4b3-352c690b0a96"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.320250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5" (OuterVolumeSpecName: "kube-api-access-j8pj5") pod "aa86c0d1-d6cb-4566-b4b3-352c690b0a96" (UID: "aa86c0d1-d6cb-4566-b4b3-352c690b0a96"). InnerVolumeSpecName "kube-api-access-j8pj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.352051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa86c0d1-d6cb-4566-b4b3-352c690b0a96" (UID: "aa86c0d1-d6cb-4566-b4b3-352c690b0a96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.378039 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a is running failed: container process not found" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.378399 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a is running failed: container process not found" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.378629 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a is running failed: container process not found" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.378656 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.402372 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc 
kubenswrapper[4780]: I1205 07:11:12.402405 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8pj5\" (UniqueName: \"kubernetes.io/projected/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-kube-api-access-j8pj5\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.402414 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-logs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.402422 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.485047 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data" (OuterVolumeSpecName: "config-data") pod "aa86c0d1-d6cb-4566-b4b3-352c690b0a96" (UID: "aa86c0d1-d6cb-4566-b4b3-352c690b0a96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.503956 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa86c0d1-d6cb-4566-b4b3-352c690b0a96-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.537026 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wdgqc"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.558894 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wdgqc"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.582498 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.582568 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1be5-account-create-update-bj4nb"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.588986 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1be5-account-create-update-bj4nb"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.605443 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.605522 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts podName:52234708-ef2b-40c7-af1b-61e1890dd674 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:14.605501306 +0000 UTC m=+1508.675017628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts") pod "novaapiebdd-account-delete-2px9p" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674") : configmap "openstack-scripts" not found Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.636948 4780 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 05 07:11:12 crc kubenswrapper[4780]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-05T07:11:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 05 07:11:12 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 456 Alarm clock "$@" Dec 05 07:11:12 crc kubenswrapper[4780]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-fs2vs" message=< Dec 05 07:11:12 crc kubenswrapper[4780]: Exiting ovn-controller (1) [FAILED] Dec 05 07:11:12 crc kubenswrapper[4780]: Killing ovn-controller (1) [ OK ] Dec 05 07:11:12 crc kubenswrapper[4780]: Killing ovn-controller (1) with SIGKILL [ OK ] Dec 05 07:11:12 crc kubenswrapper[4780]: 2025-12-05T07:11:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 05 07:11:12 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 456 Alarm clock "$@" Dec 05 07:11:12 crc kubenswrapper[4780]: > Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.636989 4780 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 05 07:11:12 crc kubenswrapper[4780]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-12-05T07:11:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Dec 05 07:11:12 crc kubenswrapper[4780]: /etc/init.d/functions: line 589: 456 Alarm clock "$@" Dec 05 07:11:12 crc kubenswrapper[4780]: > pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" containerID="cri-o://fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.637042 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-fs2vs" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" containerID="cri-o://fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba" gracePeriod=21 Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.826924 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bwb2m"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.832225 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bwb2m"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.847606 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-79a0-account-create-update-tm7nl"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.905268 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e is running failed: container process not found" containerID="ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.907437 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e is running failed: container process not found" containerID="ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.911316 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e is running failed: container process not found" containerID="ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 05 07:11:12 crc kubenswrapper[4780]: E1205 07:11:12.911382 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="galera" Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.931082 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-79a0-account-create-update-tm7nl"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.950111 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.958456 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9xcxs"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.964374 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9xcxs"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.972893 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ebdd-account-create-update-5f9cl"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.993798 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ebdd-account-create-update-5f9cl"] Dec 05 07:11:12 crc kubenswrapper[4780]: I1205 07:11:12.998488 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.026345 4780 generic.go:334] "Generic (PLEG): container finished" podID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerID="ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" exitCode=0 Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.026419 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerDied","Data":"ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.026446 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3","Type":"ContainerDied","Data":"f1382d25f679be228ed9947a98d27d54904acd40002add8d03cf765e4aa55222"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.026457 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1382d25f679be228ed9947a98d27d54904acd40002add8d03cf765e4aa55222" Dec 05 07:11:13 crc 
kubenswrapper[4780]: I1205 07:11:13.037839 4780 generic.go:334] "Generic (PLEG): container finished" podID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" containerID="e2272792c63e1f2159b6320d0d6009da818ee74ca29907f48e07464acca5482a" exitCode=0 Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.037914 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c9f9456b6-zflhk" event={"ID":"fb8bb2be-991d-4cb3-b3b9-9175c78019d9","Type":"ContainerDied","Data":"e2272792c63e1f2159b6320d0d6009da818ee74ca29907f48e07464acca5482a"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.040613 4780 generic.go:334] "Generic (PLEG): container finished" podID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerID="6729e14dde9f78be13ae40bdda9e3ae569261b8bcd8c18d065c49f17af80f082" exitCode=0 Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.040650 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerDied","Data":"6729e14dde9f78be13ae40bdda9e3ae569261b8bcd8c18d065c49f17af80f082"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.040695 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5aca675e-bb76-4588-b998-c26393dd5ab6","Type":"ContainerDied","Data":"eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.040709 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeee9696dd1264631c93029004f9100486f9335885b9e2868cbf7cad6b9f06ae" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.042260 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fs2vs_52ebc417-5adb-4ac6-9b5c-6f065fc4afe0/ovn-controller/0.log" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.042302 4780 generic.go:334] "Generic (PLEG): container finished" podID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerID="fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba" exitCode=137 Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.042388 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc64465bd-vwr2q" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.042384 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs" event={"ID":"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0","Type":"ContainerDied","Data":"fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba"} Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.042761 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59d58fb65c-nzf5k" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.068545 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.091196 4780 scope.go:117] "RemoveContainer" containerID="fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.138407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs\") pod \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.138553 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffxd7\" (UniqueName: \"kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7\") pod \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.138589 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config\") pod \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.138636 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle\") pod \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\" (UID: \"cfe98bcd-7b01-4246-9879-15ed51cf7a1f\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.152378 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.171809 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7" (OuterVolumeSpecName: "kube-api-access-ffxd7") pod "cfe98bcd-7b01-4246-9879-15ed51cf7a1f" (UID: "cfe98bcd-7b01-4246-9879-15ed51cf7a1f"). InnerVolumeSpecName "kube-api-access-ffxd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.193530 4780 scope.go:117] "RemoveContainer" containerID="672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.240239 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle\") pod \"29f97591-4528-4ed0-918c-b6de191c452a\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.240294 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9dv\" (UniqueName: \"kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv\") pod \"29f97591-4528-4ed0-918c-b6de191c452a\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.240360 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data\") pod \"29f97591-4528-4ed0-918c-b6de191c452a\" (UID: \"29f97591-4528-4ed0-918c-b6de191c452a\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.240826 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffxd7\" (UniqueName: \"kubernetes.io/projected/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-api-access-ffxd7\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.269661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv" (OuterVolumeSpecName: "kube-api-access-qs9dv") pod "29f97591-4528-4ed0-918c-b6de191c452a" (UID: "29f97591-4528-4ed0-918c-b6de191c452a"). InnerVolumeSpecName "kube-api-access-qs9dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.343267 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9dv\" (UniqueName: \"kubernetes.io/projected/29f97591-4528-4ed0-918c-b6de191c452a-kube-api-access-qs9dv\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.437957 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "cfe98bcd-7b01-4246-9879-15ed51cf7a1f" (UID: "cfe98bcd-7b01-4246-9879-15ed51cf7a1f"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.444590 4780 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.444668 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.444719 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts podName:6b6e1d3b-503e-49c8-8d33-bcaae571525c nodeName:}" failed. No retries permitted until 2025-12-05 07:11:17.444702942 +0000 UTC m=+1511.514219274 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts") pod "novacell079a0-account-delete-dfjw8" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c") : configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.453133 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data" (OuterVolumeSpecName: "config-data") pod "29f97591-4528-4ed0-918c-b6de191c452a" (UID: "29f97591-4528-4ed0-918c-b6de191c452a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.456281 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfe98bcd-7b01-4246-9879-15ed51cf7a1f" (UID: "cfe98bcd-7b01-4246-9879-15ed51cf7a1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.479734 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29f97591-4528-4ed0-918c-b6de191c452a" (UID: "29f97591-4528-4ed0-918c-b6de191c452a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.494252 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "cfe98bcd-7b01-4246-9879-15ed51cf7a1f" (UID: "cfe98bcd-7b01-4246-9879-15ed51cf7a1f"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.505560 4780 scope.go:117] "RemoveContainer" containerID="fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.512452 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb\": container with ID starting with fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb not found: ID does not exist" containerID="fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.512503 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb"} err="failed to get container status \"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb\": rpc error: code = NotFound desc = could not find container \"fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb\": container with ID starting with fe118f5da833de048ae743246dd9267e066964c882f6e8c42a5dd0a59802edfb not found: ID does not exist" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.512533 4780 scope.go:117] "RemoveContainer" containerID="672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.514706 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156\": container with ID starting with 672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156 not found: ID does not exist" containerID="672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.514752 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156"} err="failed to get container status \"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156\": rpc error: code = NotFound desc = could not find container \"672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156\": container with ID starting with 672b2b937a51f5ab9518843c2c65cee61bbf3fe79c47a438dfa121b0e351c156 not found: ID does not exist" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.514781 4780 scope.go:117] "RemoveContainer" containerID="0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.515081 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.534720 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.545832 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.552522 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.552557 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.552572 4780 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfe98bcd-7b01-4246-9879-15ed51cf7a1f-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.552584 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f97591-4528-4ed0-918c-b6de191c452a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.552660 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.552714 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts podName:202ef989-0cbf-4120-8621-11201cfe3d64 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:17.552692735 +0000 UTC m=+1511.622209067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts") pod "glance4824-account-delete-4st4x" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64") : configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.553049 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.553090 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts podName:9c542de0-85ab-43f2-89ca-fb8a6c19e49d nodeName:}" failed. No retries permitted until 2025-12-05 07:11:17.553079495 +0000 UTC m=+1511.622595827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts") pod "cinder1be5-account-delete-6mpnc" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d") : configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.553128 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.553154 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts podName:72765495-c470-41a5-b5a7-423025bdd6a7 nodeName:}" failed. 
No retries permitted until 2025-12-05 07:11:17.553146036 +0000 UTC m=+1511.622662378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts") pod "barbican796f-account-delete-h5ds7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7") : configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.558038 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.561537 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffce971d-fa60-450d-a347-29ba2a9c9c84/ovn-northd/0.log" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.561632 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.579837 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-799c48f5f4-sm7kz"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.610540 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.610955 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.612031 4780 scope.go:117] "RemoveContainer" containerID="43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.612905 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fs2vs_52ebc417-5adb-4ac6-9b5c-6f065fc4afe0/ovn-controller/0.log" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.612949 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fs2vs" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.628903 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.637732 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-669bccb86b-8cjsq"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.648007 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.653807 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.653958 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654039 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654073 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654115 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654141 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654190 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4k6p\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654275 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654313 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654383 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t4xq\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654414 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krgbp\" (UniqueName: \"kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp\") pod \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654440 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654461 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654479 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654543 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654558 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle\") pod \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654598 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654616 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config\") pod \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654641 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs\") pod \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654672 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmn5\" (UniqueName: \"kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654691 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654715 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654733 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654756 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie\") pod \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\" (UID: \"1e6efd4f-660c-44e1-bf69-8b1cec6a6e85\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654777 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info\") pod \"f5032d09-8298-4941-8b4b-0f24a57b8ced\" (UID: \"f5032d09-8298-4941-8b4b-0f24a57b8ced\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654797 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle\") pod \"ffce971d-fa60-450d-a347-29ba2a9c9c84\" (UID: \"ffce971d-fa60-450d-a347-29ba2a9c9c84\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.654815 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data\") pod \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\" (UID: \"7c32a219-7b72-4302-8cc4-b9f11a672e8d\") " Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.655599 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.655655 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts podName:574be54a-bbce-4f37-93b1-c9de6f1d0f4e nodeName:}" failed. No retries permitted until 2025-12-05 07:11:17.655640412 +0000 UTC m=+1511.725156744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts") pod "neutrona927-account-delete-5chq6" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e") : configmap "openstack-scripts" not found Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.659780 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.660672 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.660815 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config" (OuterVolumeSpecName: "config") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.661466 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c9f9456b6-zflhk" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.662798 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.663307 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.664048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts" (OuterVolumeSpecName: "scripts") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.664385 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.666042 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7c32a219-7b72-4302-8cc4-b9f11a672e8d" (UID: "7c32a219-7b72-4302-8cc4-b9f11a672e8d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.666556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data" (OuterVolumeSpecName: "config-data") pod "7c32a219-7b72-4302-8cc4-b9f11a672e8d" (UID: "7c32a219-7b72-4302-8cc4-b9f11a672e8d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.699001 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.699067 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.699085 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.700851 4780 scope.go:117] "RemoveContainer" containerID="0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.701897 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e\": container with ID starting with 0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e not found: ID does not exist" containerID="0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.701953 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e"} err="failed to get container status \"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e\": rpc error: code = NotFound desc = could not find container \"0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e\": container with ID starting with 0a33b5de033e96730460b020462d51e2ed0ed6469456ea131ed165c21a2c194e not found: ID does not exist" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.701992 4780 scope.go:117] "RemoveContainer" containerID="43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.702556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.702811 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.702912 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7\": container with ID starting with 43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7 not found: ID does not exist" containerID="43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.702938 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7"} err="failed to get container status \"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7\": rpc error: code = NotFound desc = could not find container \"43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7\": container with ID starting with 43b22b2393bf8c4befcfb7f25d629c4eeeefa20246d12170f30f930c8a85b0e7 not found: ID does not exist" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.702957 4780 scope.go:117] "RemoveContainer" containerID="e6a45803166a8897d2c397830c29bf665c77e0b2be9e71bc5fbb3673e9a36a20" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.703099 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.706648 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq" (OuterVolumeSpecName: "kube-api-access-4t4xq") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "kube-api-access-4t4xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.713779 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.716917 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.717017 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.720509 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.721623 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info" (OuterVolumeSpecName: "pod-info") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.721638 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp" (OuterVolumeSpecName: "kube-api-access-krgbp") pod "7c32a219-7b72-4302-8cc4-b9f11a672e8d" (UID: "7c32a219-7b72-4302-8cc4-b9f11a672e8d"). InnerVolumeSpecName "kube-api-access-krgbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.722125 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.722277 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5" (OuterVolumeSpecName: "kube-api-access-vcmn5") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "kube-api-access-vcmn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.729189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.755751 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756016 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756125 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756219 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756394 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756492 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756597 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756685 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756772 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.756897 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xcsm\" (UniqueName: \"kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: 
\"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757012 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzfgz\" (UniqueName: \"kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757109 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757302 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757501 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757612 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757759 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45kr\" (UniqueName: \"kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757963 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: 
\"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758050 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts\") pod \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\" (UID: \"885ecc9e-e70a-4d6e-ab6b-f82e46be61a3\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758136 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758236 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758317 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758412 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: \"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758711 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn\") pod \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\" (UID: \"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758792 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.758912 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts\") pod \"5aca675e-bb76-4588-b998-c26393dd5ab6\" (UID: 
\"5aca675e-bb76-4588-b998-c26393dd5ab6\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.759020 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvp2h\" (UniqueName: \"kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h\") pod \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\" (UID: \"fb8bb2be-991d-4cb3-b3b9-9175c78019d9\") " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.759723 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.759832 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.759976 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760051 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760124 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760203 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5032d09-8298-4941-8b4b-0f24a57b8ced-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760280 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760469 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760547 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t4xq\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-kube-api-access-4t4xq\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760619 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krgbp\" (UniqueName: \"kubernetes.io/projected/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kube-api-access-krgbp\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760712 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760784 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760853 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760951 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761027 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761097 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmn5\" (UniqueName: \"kubernetes.io/projected/ffce971d-fa60-450d-a347-29ba2a9c9c84-kube-api-access-vcmn5\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761166 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffce971d-fa60-450d-a347-29ba2a9c9c84-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761237 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761315 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761393 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5032d09-8298-4941-8b4b-0f24a57b8ced-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761470 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c32a219-7b72-4302-8cc4-b9f11a672e8d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.757800 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts" (OuterVolumeSpecName: "scripts") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.760215 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761786 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.761835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run" (OuterVolumeSpecName: "var-run") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.772889 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info" (OuterVolumeSpecName: "pod-info") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.773774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.776039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm" (OuterVolumeSpecName: "kube-api-access-7xcsm") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "kube-api-access-7xcsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.778632 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts" (OuterVolumeSpecName: "scripts") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.778766 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.778864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.779075 4780 scope.go:117] "RemoveContainer" containerID="d68e53a20f7b0772cf31f43fd6387417cf438c45cf97337ca3c20b74894ceb64" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.779356 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.781276 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.784035 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.784840 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p" (OuterVolumeSpecName: "kube-api-access-w4k6p") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "kube-api-access-w4k6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.789895 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.790180 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.791549 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.796180 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz" (OuterVolumeSpecName: "kube-api-access-nzfgz") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "kube-api-access-nzfgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.799846 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr" (OuterVolumeSpecName: "kube-api-access-l45kr") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "kube-api-access-l45kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.808782 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h" (OuterVolumeSpecName: "kube-api-access-kvp2h") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "kube-api-access-kvp2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.834085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts" (OuterVolumeSpecName: "scripts") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.852076 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.864727 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.869546 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvp2h\" (UniqueName: \"kubernetes.io/projected/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-kube-api-access-kvp2h\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.869689 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.869754 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.870040 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.870135 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.870199 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4k6p\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-kube-api-access-w4k6p\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.870475 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-config-data-generated\") on 
node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.870817 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871132 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xcsm\" (UniqueName: \"kubernetes.io/projected/5aca675e-bb76-4588-b998-c26393dd5ab6-kube-api-access-7xcsm\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871315 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzfgz\" (UniqueName: \"kubernetes.io/projected/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-kube-api-access-nzfgz\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871402 4780 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871471 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871537 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45kr\" (UniqueName: \"kubernetes.io/projected/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-kube-api-access-l45kr\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871607 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871678 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5aca675e-bb76-4588-b998-c26393dd5ab6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871746 4780 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.871819 4780 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.880425 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.881609 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.886064 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.883778 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.883912 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.884224 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.875008 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.889662 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.891057 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.891133 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.894062 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.896799 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:13 crc kubenswrapper[4780]: E1205 07:11:13.896874 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.914233 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.958480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.989555 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.991746 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.991805 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.991847 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 05 07:11:13 crc kubenswrapper[4780]: I1205 07:11:13.999515 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.011901 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.032048 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.032121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.043041 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.058034 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.063191 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerID="ff8d2505ef43e90e4d46fce0edfc4dfb01e6e9614b534d8d62b3dd427342f259" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.063306 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell079a0-account-delete-dfjw8" event={"ID":"6b6e1d3b-503e-49c8-8d33-bcaae571525c","Type":"ContainerDied","Data":"ff8d2505ef43e90e4d46fce0edfc4dfb01e6e9614b534d8d62b3dd427342f259"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.064179 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.067708 4780 generic.go:334] "Generic (PLEG): container finished" podID="202ef989-0cbf-4120-8621-11201cfe3d64" containerID="91701c97c22ff2ea451afa0cad7afc3e5ddb643d5a29da63d7bfd2ff2e6e4159" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.067791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance4824-account-delete-4st4x" event={"ID":"202ef989-0cbf-4120-8621-11201cfe3d64","Type":"ContainerDied","Data":"91701c97c22ff2ea451afa0cad7afc3e5ddb643d5a29da63d7bfd2ff2e6e4159"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.070533 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.070678 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.080736 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fs2vs_52ebc417-5adb-4ac6-9b5c-6f065fc4afe0/ovn-controller/0.log" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.080843 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fs2vs" event={"ID":"52ebc417-5adb-4ac6-9b5c-6f065fc4afe0","Type":"ContainerDied","Data":"c56ceaa09feff50252926c6530e388c38cc7259afb3917f4c519acbf12b1c74d"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.080973 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fs2vs" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.093294 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.093331 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.093340 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.097279 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.110067 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.126942 4780 generic.go:334] "Generic (PLEG): container finished" podID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerID="11051712969c213f12d3694e8f67294cafd2ca56902d29769c252f1281c88535" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.127009 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1be5-account-delete-6mpnc" event={"ID":"9c542de0-85ab-43f2-89ca-fb8a6c19e49d","Type":"ContainerDied","Data":"11051712969c213f12d3694e8f67294cafd2ca56902d29769c252f1281c88535"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.133914 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c32a219-7b72-4302-8cc4-b9f11a672e8d" (UID: "7c32a219-7b72-4302-8cc4-b9f11a672e8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.146524 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.160311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.162540 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0263e19c-beda-4939-84f0-f5baf54923a5" path="/var/lib/kubelet/pods/0263e19c-beda-4939-84f0-f5baf54923a5/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.163591 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb26cab-c196-4a45-8ba3-2d9066683eaa" path="/var/lib/kubelet/pods/0fb26cab-c196-4a45-8ba3-2d9066683eaa/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.164176 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b3d441-7101-46f0-8bcc-5ae9352dfa6c" path="/var/lib/kubelet/pods/11b3d441-7101-46f0-8bcc-5ae9352dfa6c/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.164644 4780 generic.go:334] "Generic (PLEG): container finished" podID="52234708-ef2b-40c7-af1b-61e1890dd674" containerID="10b0a50ca124c4ad9f3abfe8602438f2b47b5b7592ca781be79d4c8a693c9509" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.165774 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" path="/var/lib/kubelet/pods/2a294e09-ff41-4fcc-81f4-2a674c77c239/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.169418 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" path="/var/lib/kubelet/pods/33af7252-1228-4051-bab0-cfcaee04fe1d/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.169822 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.170007 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" path="/var/lib/kubelet/pods/43c681b8-252b-4d1a-8293-27528bc83ed8/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.170198 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-799c48f5f4-sm7kz" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.155:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.171350 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" path="/var/lib/kubelet/pods/621ea4dd-7bc5-4404-9369-1cd99335155d/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.171960 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a53d09-12f2-4488-814f-47114ab22120" path="/var/lib/kubelet/pods/89a53d09-12f2-4488-814f-47114ab22120/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.172458 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" path="/var/lib/kubelet/pods/a6b8df94-a979-4c1a-bffd-5f5052f0ad12/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.173513 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="acb4dcef-f976-4800-9e85-c59617b30727" path="/var/lib/kubelet/pods/acb4dcef-f976-4800-9e85-c59617b30727/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.174041 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" path="/var/lib/kubelet/pods/c381b4ec-8b36-4a3d-8e07-dbbc3a021f11/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.174620 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" path="/var/lib/kubelet/pods/cf87b821-f0c0-41df-a1ee-f2c44a09cc82/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.175855 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" path="/var/lib/kubelet/pods/d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.176580 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" path="/var/lib/kubelet/pods/e0f8b72a-b08b-4c2f-98dc-242016b6f846/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.177180 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c96ddb-0a87-4ae3-8676-03bc8afaf100" path="/var/lib/kubelet/pods/e4c96ddb-0a87-4ae3-8676-03bc8afaf100/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.178833 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" path="/var/lib/kubelet/pods/e9395104-b579-44d5-bbf0-69fe4d17406d/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.179569 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06e0616-87ca-48d5-9738-e92e1edb2ac5" path="/var/lib/kubelet/pods/f06e0616-87ca-48d5-9738-e92e1edb2ac5/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.180110 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" path="/var/lib/kubelet/pods/fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c/volumes" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.196344 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.196371 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.196381 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.200016 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.208205 4780 generic.go:334] "Generic (PLEG): container finished" podID="5356607a-a085-4294-8d0a-22c641259745" containerID="5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629" exitCode=0 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.212982 4780 generic.go:334] "Generic (PLEG): container finished" podID="72765495-c470-41a5-b5a7-423025bdd6a7" containerID="444c40197ae7564eda15c0117f5e4feca257f0f17b552b7ddfbee889695f73c6" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.224217 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c9f9456b6-zflhk" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.230851 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data" (OuterVolumeSpecName: "config-data") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.235102 4780 generic.go:334] "Generic (PLEG): container finished" podID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerID="e4240881e126d56f6ebdaaa11d4b308525ebc43968e845d7a10583011da374f1" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.238457 4780 generic.go:334] "Generic (PLEG): container finished" podID="c269c975-543e-44e0-ac7a-abf3f7a619dd" containerID="c8898218e1adcd6b359590538dbb0f9623e640d8bf783b1fc90ff528f10abd46" exitCode=1 Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.238577 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.241910 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.243097 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.243331 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.244649 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.246191 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.246551 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.246750 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.297479 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.297507 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.352194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data" (OuterVolumeSpecName: "config-data") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.357019 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf" (OuterVolumeSpecName: "server-conf") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.364179 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data" (OuterVolumeSpecName: "config-data") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.399598 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.399632 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.399643 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.402491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "7c32a219-7b72-4302-8cc4-b9f11a672e8d" (UID: "7c32a219-7b72-4302-8cc4-b9f11a672e8d"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.442836 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf" (OuterVolumeSpecName: "server-conf") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.456039 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.464167 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.464385 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-fc64465bd-vwr2q"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.464444 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiebdd-account-delete-2px9p" event={"ID":"52234708-ef2b-40c7-af1b-61e1890dd674","Type":"ContainerDied","Data":"10b0a50ca124c4ad9f3abfe8602438f2b47b5b7592ca781be79d4c8a693c9509"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.468302 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fb8bb2be-991d-4cb3-b3b9-9175c78019d9" (UID: "fb8bb2be-991d-4cb3-b3b9-9175c78019d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469275 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469312 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-59d58fb65c-nzf5k"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469337 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerDied","Data":"5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469363 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican796f-account-delete-h5ds7" event={"ID":"72765495-c470-41a5-b5a7-423025bdd6a7","Type":"ContainerDied","Data":"444c40197ae7564eda15c0117f5e4feca257f0f17b552b7ddfbee889695f73c6"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469386 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c9f9456b6-zflhk" event={"ID":"fb8bb2be-991d-4cb3-b3b9-9175c78019d9","Type":"ContainerDied","Data":"38735d60427b27a0b905ae0c484929bc6404f9b55ce5830ca95e81314848b8e2"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469417 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrona927-account-delete-5chq6" event={"ID":"574be54a-bbce-4f37-93b1-c9de6f1d0f4e","Type":"ContainerDied","Data":"e4240881e126d56f6ebdaaa11d4b308525ebc43968e845d7a10583011da374f1"} Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.469437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa463-account-delete-wsgnm" event={"ID":"c269c975-543e-44e0-ac7a-abf3f7a619dd","Type":"ContainerDied","Data":"c8898218e1adcd6b359590538dbb0f9623e640d8bf783b1fc90ff528f10abd46"} Dec 05 
07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.492905 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.498526 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.501654 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb8bb2be-991d-4cb3-b3b9-9175c78019d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.501736 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5032d09-8298-4941-8b4b-0f24a57b8ced-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.501749 4780 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.501760 4780 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c32a219-7b72-4302-8cc4-b9f11a672e8d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.501771 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.509575 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.526259 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" (UID: "885ecc9e-e70a-4d6e-ab6b-f82e46be61a3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.551260 4780 scope.go:117] "RemoveContainer" containerID="3f96e5ef3fbb0acd20f1bcd74508b2612b79eecee120455dd0bc859e46d3b5c7" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.551658 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.551815 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f5032d09-8298-4941-8b4b-0f24a57b8ced" (UID: "f5032d09-8298-4941-8b4b-0f24a57b8ced"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.556989 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.569118 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.571939 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data" (OuterVolumeSpecName: "config-data") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.572170 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "ffce971d-fa60-450d-a347-29ba2a9c9c84" (UID: "ffce971d-fa60-450d-a347-29ba2a9c9c84"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.585023 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.606807 4780 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.607663 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.607747 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5032d09-8298-4941-8b4b-0f24a57b8ced-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.607912 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffce971d-fa60-450d-a347-29ba2a9c9c84-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.608021 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: E1205 07:11:14.608260 4780 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 05 07:11:14 crc kubenswrapper[4780]: E1205 07:11:14.608472 4780 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts podName:52234708-ef2b-40c7-af1b-61e1890dd674 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:18.608454882 +0000 UTC m=+1512.677971214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts") pod "novaapiebdd-account-delete-2px9p" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674") : configmap "openstack-scripts" not found Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.612066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aca675e-bb76-4588-b998-c26393dd5ab6" (UID: "5aca675e-bb76-4588-b998-c26393dd5ab6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.612325 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.623259 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.641962 4780 scope.go:117] "RemoveContainer" containerID="0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.659583 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" (UID: "52ebc417-5adb-4ac6-9b5c-6f065fc4afe0"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.667717 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5c9f9456b6-zflhk"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.696608 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.696663 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.676934 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.695149 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.712916 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aca675e-bb76-4588-b998-c26393dd5ab6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.712955 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.713496 4780 scope.go:117] "RemoveContainer" containerID="e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.747276 4780 scope.go:117] "RemoveContainer" containerID="0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6" Dec 05 07:11:14 crc kubenswrapper[4780]: E1205 07:11:14.750710 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6\": container with ID starting with 0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6 not found: ID does not exist" containerID="0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.750765 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6"} err="failed to get container status \"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6\": rpc error: code = NotFound desc = could not find container \"0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6\": container with ID starting with 0292fc92043fe7e7f2b25f425657d70930ff63d52b7169de7c75af503047b0a6 not found: ID does not exist" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.750800 4780 scope.go:117] "RemoveContainer" containerID="e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208" Dec 05 07:11:14 crc kubenswrapper[4780]: E1205 07:11:14.751140 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208\": container with ID starting with e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208 not found: ID does not exist" containerID="e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.751278 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208"} err="failed to get container status \"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208\": rpc error: code = NotFound desc = could not find container \"e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208\": container with ID starting with e1b88bb6f5c6c6372b8d4da609d104eab8f76980f3004d2faf16453c40326208 not found: ID does not exist" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.751374 4780 scope.go:117] "RemoveContainer" containerID="7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.765194 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" (UID: "1e6efd4f-660c-44e1-bf69-8b1cec6a6e85"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.768499 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fs2vs"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.772786 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fs2vs"] Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.783309 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.801150 4780 scope.go:117] "RemoveContainer" containerID="091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.813489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts\") pod \"202ef989-0cbf-4120-8621-11201cfe3d64\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.813727 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnrt\" (UniqueName: \"kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt\") pod \"202ef989-0cbf-4120-8621-11201cfe3d64\" (UID: \"202ef989-0cbf-4120-8621-11201cfe3d64\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.813751 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhx99\" (UniqueName: \"kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99\") pod \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.813774 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts\") pod \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\" (UID: \"6b6e1d3b-503e-49c8-8d33-bcaae571525c\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.814846 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.815595 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b6e1d3b-503e-49c8-8d33-bcaae571525c" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.817564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "202ef989-0cbf-4120-8621-11201cfe3d64" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.823780 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99" (OuterVolumeSpecName: "kube-api-access-xhx99") pod "6b6e1d3b-503e-49c8-8d33-bcaae571525c" (UID: "6b6e1d3b-503e-49c8-8d33-bcaae571525c"). InnerVolumeSpecName "kube-api-access-xhx99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.826863 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt" (OuterVolumeSpecName: "kube-api-access-kvnrt") pod "202ef989-0cbf-4120-8621-11201cfe3d64" (UID: "202ef989-0cbf-4120-8621-11201cfe3d64"). InnerVolumeSpecName "kube-api-access-kvnrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.916189 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv8rj\" (UniqueName: \"kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj\") pod \"52234708-ef2b-40c7-af1b-61e1890dd674\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.916296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts\") pod \"52234708-ef2b-40c7-af1b-61e1890dd674\" (UID: \"52234708-ef2b-40c7-af1b-61e1890dd674\") " Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.917237 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/202ef989-0cbf-4120-8621-11201cfe3d64-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.917264 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnrt\" (UniqueName: \"kubernetes.io/projected/202ef989-0cbf-4120-8621-11201cfe3d64-kube-api-access-kvnrt\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.917278 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhx99\" (UniqueName: \"kubernetes.io/projected/6b6e1d3b-503e-49c8-8d33-bcaae571525c-kube-api-access-xhx99\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.917291 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6e1d3b-503e-49c8-8d33-bcaae571525c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.918174 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"52234708-ef2b-40c7-af1b-61e1890dd674" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.921332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj" (OuterVolumeSpecName: "kube-api-access-vv8rj") pod "52234708-ef2b-40c7-af1b-61e1890dd674" (UID: "52234708-ef2b-40c7-af1b-61e1890dd674"). InnerVolumeSpecName "kube-api-access-vv8rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:14 crc kubenswrapper[4780]: I1205 07:11:14.999005 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.020060 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv8rj\" (UniqueName: \"kubernetes.io/projected/52234708-ef2b-40c7-af1b-61e1890dd674-kube-api-access-vv8rj\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.020090 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52234708-ef2b-40c7-af1b-61e1890dd674-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.050282 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.059988 4780 scope.go:117] "RemoveContainer" containerID="7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.060162 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: E1205 07:11:15.064062 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11\": container with ID starting with 7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11 not found: ID does not exist" containerID="7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.064122 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11"} err="failed to get container status \"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11\": rpc error: code = NotFound desc = could not find container \"7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11\": container with ID starting with 7f3330fd8c2a04c81ce715432976105f0958a2953d410ff5514ea014316b0c11 not found: ID does not exist" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.064157 4780 scope.go:117] "RemoveContainer" containerID="091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21" Dec 05 07:11:15 crc kubenswrapper[4780]: E1205 07:11:15.064549 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21\": container with ID starting with 091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21 not found: ID does not exist" 
containerID="091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.064600 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21"} err="failed to get container status \"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21\": rpc error: code = NotFound desc = could not find container \"091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21\": container with ID starting with 091dcbce4c0c863bde1848e503cd5498e3ba00d7f924674c30e373ca8b08bb21 not found: ID does not exist" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.064628 4780 scope.go:117] "RemoveContainer" containerID="d31b368abd1529e65ca57a055a89980dc4162f66182c47922e838d0205e61994" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.071117 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.082647 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.104769 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.112174 4780 scope.go:117] "RemoveContainer" containerID="6733da8d639a7996b7e7eb99726131bb8bf0c04d0e15d210e53fa012587cc24c" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.117235 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.121673 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts\") pod \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.121953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfstv\" (UniqueName: \"kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv\") pod \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\" (UID: \"9c542de0-85ab-43f2-89ca-fb8a6c19e49d\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.123286 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c542de0-85ab-43f2-89ca-fb8a6c19e49d" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.126947 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.136619 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv" (OuterVolumeSpecName: "kube-api-access-gfstv") pod "9c542de0-85ab-43f2-89ca-fb8a6c19e49d" (UID: "9c542de0-85ab-43f2-89ca-fb8a6c19e49d"). InnerVolumeSpecName "kube-api-access-gfstv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.137710 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.163342 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.168790 4780 scope.go:117] "RemoveContainer" containerID="fc4ab1bdd9450d2793d03d7cfe2dd694a4092653e1a82515aa096c88a796d8ba" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.176367 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.201767 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.226521 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq7gs\" (UniqueName: \"kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs\") pod \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.226573 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvgj\" (UniqueName: \"kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj\") pod \"72765495-c470-41a5-b5a7-423025bdd6a7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.226619 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts\") pod \"c269c975-543e-44e0-ac7a-abf3f7a619dd\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.226658 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts\") pod \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\" (UID: \"574be54a-bbce-4f37-93b1-c9de6f1d0f4e\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.226719 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44g4d\" (UniqueName: \"kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d\") pod \"c269c975-543e-44e0-ac7a-abf3f7a619dd\" (UID: \"c269c975-543e-44e0-ac7a-abf3f7a619dd\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.228249 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfstv\" (UniqueName: \"kubernetes.io/projected/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-kube-api-access-gfstv\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.228269 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c542de0-85ab-43f2-89ca-fb8a6c19e49d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.231239 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c269c975-543e-44e0-ac7a-abf3f7a619dd" (UID: "c269c975-543e-44e0-ac7a-abf3f7a619dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.232010 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs" (OuterVolumeSpecName: "kube-api-access-bq7gs") pod "574be54a-bbce-4f37-93b1-c9de6f1d0f4e" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e"). InnerVolumeSpecName "kube-api-access-bq7gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.233355 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "574be54a-bbce-4f37-93b1-c9de6f1d0f4e" (UID: "574be54a-bbce-4f37-93b1-c9de6f1d0f4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.236308 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj" (OuterVolumeSpecName: "kube-api-access-vkvgj") pod "72765495-c470-41a5-b5a7-423025bdd6a7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7"). InnerVolumeSpecName "kube-api-access-vkvgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.237033 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d" (OuterVolumeSpecName: "kube-api-access-44g4d") pod "c269c975-543e-44e0-ac7a-abf3f7a619dd" (UID: "c269c975-543e-44e0-ac7a-abf3f7a619dd"). InnerVolumeSpecName "kube-api-access-44g4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.239913 4780 scope.go:117] "RemoveContainer" containerID="39bdef87adcda6005206d39eb2fa9b742fb09f571d1b28131f494c011a73a518" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.291667 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance4824-account-delete-4st4x" event={"ID":"202ef989-0cbf-4120-8621-11201cfe3d64","Type":"ContainerDied","Data":"b462c90b56a40d7557ce2f206ed6edc5449e261c1776ec03e6ec26fe44bd4b8c"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.291752 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance4824-account-delete-4st4x" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.301066 4780 scope.go:117] "RemoveContainer" containerID="f26a658dbc0f16fa4268a778cc2e72e57c54ffefa2863c5a3f9e4202f590a60b" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.316573 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican796f-account-delete-h5ds7" event={"ID":"72765495-c470-41a5-b5a7-423025bdd6a7","Type":"ContainerDied","Data":"ae01a9cb16792f9fade952477b0f5fe02bbc8f962121d2e0d047909e8a5bbe43"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.316673 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican796f-account-delete-h5ds7" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.328870 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts\") pod \"72765495-c470-41a5-b5a7-423025bdd6a7\" (UID: \"72765495-c470-41a5-b5a7-423025bdd6a7\") " Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329280 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44g4d\" (UniqueName: \"kubernetes.io/projected/c269c975-543e-44e0-ac7a-abf3f7a619dd-kube-api-access-44g4d\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329299 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq7gs\" (UniqueName: \"kubernetes.io/projected/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-kube-api-access-bq7gs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329310 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkvgj\" (UniqueName: \"kubernetes.io/projected/72765495-c470-41a5-b5a7-423025bdd6a7-kube-api-access-vkvgj\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329320 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c269c975-543e-44e0-ac7a-abf3f7a619dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329329 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574be54a-bbce-4f37-93b1-c9de6f1d0f4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.329693 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72765495-c470-41a5-b5a7-423025bdd6a7" (UID: "72765495-c470-41a5-b5a7-423025bdd6a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.334007 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance4824-account-delete-4st4x"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.350678 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder1be5-account-delete-6mpnc" event={"ID":"9c542de0-85ab-43f2-89ca-fb8a6c19e49d","Type":"ContainerDied","Data":"fe21a811d663618b9fd149884fbb733768d2c4ab226581c500612dc7d36f0541"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.350782 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder1be5-account-delete-6mpnc" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.357846 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance4824-account-delete-4st4x"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.365323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutrona927-account-delete-5chq6" event={"ID":"574be54a-bbce-4f37-93b1-c9de6f1d0f4e","Type":"ContainerDied","Data":"fa99f2a2d4d4d8e7399588071249075c46eaaa6972a5dfa29dd9a0119560f10e"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.365438 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutrona927-account-delete-5chq6" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.366385 4780 scope.go:117] "RemoveContainer" containerID="409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.382035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementa463-account-delete-wsgnm" event={"ID":"c269c975-543e-44e0-ac7a-abf3f7a619dd","Type":"ContainerDied","Data":"5e2307920b476147e02bff65c8ad2fb39f274b042f6d2f22124c7288a75d6831"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.382123 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementa463-account-delete-wsgnm" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.390075 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapiebdd-account-delete-2px9p" event={"ID":"52234708-ef2b-40c7-af1b-61e1890dd674","Type":"ContainerDied","Data":"8f280b141a81b2c397d24ff42404020ef0e106f0121446dbc46d2b0de4544e72"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.390309 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapiebdd-account-delete-2px9p" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.398493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell079a0-account-delete-dfjw8" event={"ID":"6b6e1d3b-503e-49c8-8d33-bcaae571525c","Type":"ContainerDied","Data":"5f8287036f1e695574b04f10fd9553469b310f47439e97038098bede213305c9"} Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.398570 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell079a0-account-delete-dfjw8" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.419788 4780 scope.go:117] "RemoveContainer" containerID="e2272792c63e1f2159b6320d0d6009da818ee74ca29907f48e07464acca5482a" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.438407 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72765495-c470-41a5-b5a7-423025bdd6a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.526573 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.556305 4780 scope.go:117] "RemoveContainer" containerID="530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.556635 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementa463-account-delete-wsgnm"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.564751 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutrona927-account-delete-5chq6"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.577953 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutrona927-account-delete-5chq6"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.585763 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.594854 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapiebdd-account-delete-2px9p"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.604927 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.614912 4780 scope.go:117] "RemoveContainer" containerID="91701c97c22ff2ea451afa0cad7afc3e5ddb643d5a29da63d7bfd2ff2e6e4159" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.622979 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder1be5-account-delete-6mpnc"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.631982 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.640960 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell079a0-account-delete-dfjw8"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.684785 4780 scope.go:117] "RemoveContainer" containerID="444c40197ae7564eda15c0117f5e4feca257f0f17b552b7ddfbee889695f73c6" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.690396 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.701732 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican796f-account-delete-h5ds7"] Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.729855 4780 scope.go:117] "RemoveContainer" containerID="409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21" Dec 05 07:11:15 crc kubenswrapper[4780]: E1205 07:11:15.730423 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21\": container with ID starting with 409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21 not found: ID does not exist" containerID="409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.730534 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21"} err="failed to get container status \"409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21\": rpc error: code = NotFound desc = could not find container \"409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21\": container with ID starting with 409e07cfe508b05fb4190db025b0cc07d8c8a338d79e2cdebe385eee2f517f21 not found: ID does not exist" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.730625 4780 scope.go:117] "RemoveContainer" containerID="11051712969c213f12d3694e8f67294cafd2ca56902d29769c252f1281c88535" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.791080 4780 scope.go:117] "RemoveContainer" containerID="e4240881e126d56f6ebdaaa11d4b308525ebc43968e845d7a10583011da374f1" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.908459 4780 scope.go:117] "RemoveContainer" containerID="530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465" Dec 05 07:11:15 crc kubenswrapper[4780]: E1205 07:11:15.912017 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465\": container with ID starting with 530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465 not found: ID does not exist" containerID="530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 
07:11:15.912195 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465"} err="failed to get container status \"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465\": rpc error: code = NotFound desc = could not find container \"530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465\": container with ID starting with 530328ec2be300ec61227d59b3b9c7b04494b482b0f105c64e510bb5aec6f465 not found: ID does not exist" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.912274 4780 scope.go:117] "RemoveContainer" containerID="c8898218e1adcd6b359590538dbb0f9623e640d8bf783b1fc90ff528f10abd46" Dec 05 07:11:15 crc kubenswrapper[4780]: I1205 07:11:15.942059 4780 scope.go:117] "RemoveContainer" containerID="10b0a50ca124c4ad9f3abfe8602438f2b47b5b7592ca781be79d4c8a693c9509" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.007052 4780 scope.go:117] "RemoveContainer" containerID="ff8d2505ef43e90e4d46fce0edfc4dfb01e6e9614b534d8d62b3dd427342f259" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.148377 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" path="/var/lib/kubelet/pods/1e6efd4f-660c-44e1-bf69-8b1cec6a6e85/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.148999 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" path="/var/lib/kubelet/pods/202ef989-0cbf-4120-8621-11201cfe3d64/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.149478 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f97591-4528-4ed0-918c-b6de191c452a" path="/var/lib/kubelet/pods/29f97591-4528-4ed0-918c-b6de191c452a/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.150476 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" path="/var/lib/kubelet/pods/52234708-ef2b-40c7-af1b-61e1890dd674/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.152012 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" path="/var/lib/kubelet/pods/52ebc417-5adb-4ac6-9b5c-6f065fc4afe0/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.152942 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" path="/var/lib/kubelet/pods/574be54a-bbce-4f37-93b1-c9de6f1d0f4e/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.154643 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" path="/var/lib/kubelet/pods/5aca675e-bb76-4588-b998-c26393dd5ab6/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.156027 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" path="/var/lib/kubelet/pods/6b6e1d3b-503e-49c8-8d33-bcaae571525c/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.156814 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" path="/var/lib/kubelet/pods/72765495-c470-41a5-b5a7-423025bdd6a7/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.158111 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" 
path="/var/lib/kubelet/pods/7c32a219-7b72-4302-8cc4-b9f11a672e8d/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.159022 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" path="/var/lib/kubelet/pods/885ecc9e-e70a-4d6e-ab6b-f82e46be61a3/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.160088 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: i/o timeout" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.160580 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" path="/var/lib/kubelet/pods/8d9c218c-8cf4-468d-a946-bb14fc0024b0/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.161564 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" path="/var/lib/kubelet/pods/9c542de0-85ab-43f2-89ca-fb8a6c19e49d/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.162376 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" path="/var/lib/kubelet/pods/aa86c0d1-d6cb-4566-b4b3-352c690b0a96/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.164131 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c269c975-543e-44e0-ac7a-abf3f7a619dd" path="/var/lib/kubelet/pods/c269c975-543e-44e0-ac7a-abf3f7a619dd/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.164781 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" path="/var/lib/kubelet/pods/cfe98bcd-7b01-4246-9879-15ed51cf7a1f/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.165857 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" path="/var/lib/kubelet/pods/f5032d09-8298-4941-8b4b-0f24a57b8ced/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.167108 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" path="/var/lib/kubelet/pods/fb8bb2be-991d-4cb3-b3b9-9175c78019d9/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.167955 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" path="/var/lib/kubelet/pods/ffce971d-fa60-450d-a347-29ba2a9c9c84/volumes" Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.416949 4780 generic.go:334] "Generic (PLEG): container finished" podID="5356607a-a085-4294-8d0a-22c641259745" containerID="157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a" exitCode=0 Dec 05 07:11:16 crc kubenswrapper[4780]: I1205 07:11:16.416995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerDied","Data":"157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a"} Dec 05 07:11:18 crc kubenswrapper[4780]: I1205 07:11:18.458752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerStarted","Data":"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232"} Dec 05 07:11:18 crc 
kubenswrapper[4780]: E1205 07:11:18.880208 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.881100 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.881573 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.881614 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.881956 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.883393 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.884500 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:18 crc kubenswrapper[4780]: E1205 07:11:18.884622 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:19 crc kubenswrapper[4780]: E1205 07:11:19.213830 4780 configmap.go:193] Couldn't get configMap 
openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:19 crc kubenswrapper[4780]: E1205 07:11:19.213951 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:11:35.213926554 +0000 UTC m=+1529.283442886 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.882158 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.883190 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.883229 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.884337 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.884398 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.885215 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.886743 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:23 crc kubenswrapper[4780]: E1205 07:11:23.886785 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:24 crc kubenswrapper[4780]: I1205 07:11:24.880952 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:24 crc kubenswrapper[4780]: I1205 07:11:24.881347 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:24 crc kubenswrapper[4780]: I1205 07:11:24.931094 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:24 crc kubenswrapper[4780]: I1205 07:11:24.956967 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kqdm6" podStartSLOduration=17.471111473 podStartE2EDuration="20.956942098s" podCreationTimestamp="2025-12-05 07:11:04 +0000 UTC" firstStartedPulling="2025-12-05 07:11:14.210438349 +0000 UTC m=+1508.279954681" lastFinishedPulling="2025-12-05 07:11:17.696268974 +0000 UTC m=+1511.765785306" observedRunningTime="2025-12-05 07:11:18.482724682 +0000 UTC m=+1512.552241024" watchObservedRunningTime="2025-12-05 07:11:24.956942098 +0000 UTC m=+1519.026458430" Dec 05 07:11:25 crc kubenswrapper[4780]: I1205 07:11:25.577450 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:25 crc kubenswrapper[4780]: I1205 07:11:25.627363 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:27 crc kubenswrapper[4780]: I1205 07:11:27.546799 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerID="c64aac9dc2da1feacd133e1cbfed47f07ec40d71d95e0fe650627bb11646f1e3" exitCode=0 Dec 05 07:11:27 crc kubenswrapper[4780]: I1205 07:11:27.546894 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerDied","Data":"c64aac9dc2da1feacd133e1cbfed47f07ec40d71d95e0fe650627bb11646f1e3"} Dec 05 07:11:27 crc kubenswrapper[4780]: I1205 07:11:27.547418 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kqdm6" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="registry-server" containerID="cri-o://a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232" gracePeriod=2 Dec 05 07:11:27 crc kubenswrapper[4780]: I1205 07:11:27.964517 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.056576 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq\") pod \"5356607a-a085-4294-8d0a-22c641259745\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.057001 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities\") pod \"5356607a-a085-4294-8d0a-22c641259745\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.057070 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content\") pod \"5356607a-a085-4294-8d0a-22c641259745\" (UID: \"5356607a-a085-4294-8d0a-22c641259745\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.058468 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities" (OuterVolumeSpecName: "utilities") pod "5356607a-a085-4294-8d0a-22c641259745" (UID: "5356607a-a085-4294-8d0a-22c641259745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.064608 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq" (OuterVolumeSpecName: "kube-api-access-t42wq") pod "5356607a-a085-4294-8d0a-22c641259745" (UID: "5356607a-a085-4294-8d0a-22c641259745"). InnerVolumeSpecName "kube-api-access-t42wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.119169 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5356607a-a085-4294-8d0a-22c641259745" (UID: "5356607a-a085-4294-8d0a-22c641259745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.159256 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.159285 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t42wq\" (UniqueName: \"kubernetes.io/projected/5356607a-a085-4294-8d0a-22c641259745-kube-api-access-t42wq\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.159294 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5356607a-a085-4294-8d0a-22c641259745-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.279361 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.362947 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363065 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363089 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363156 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.363218 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq62h\" (UniqueName: \"kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h\") pod \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\" (UID: \"e5443f43-c1d5-4563-a28c-63b54fd78ee6\") " Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.367296 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.367368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h" (OuterVolumeSpecName: "kube-api-access-zq62h") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "kube-api-access-zq62h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.402614 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.402661 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config" (OuterVolumeSpecName: "config") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.404366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.423221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.434546 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e5443f43-c1d5-4563-a28c-63b54fd78ee6" (UID: "e5443f43-c1d5-4563-a28c-63b54fd78ee6"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464591 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464633 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464645 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq62h\" (UniqueName: \"kubernetes.io/projected/e5443f43-c1d5-4563-a28c-63b54fd78ee6-kube-api-access-zq62h\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464655 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-config\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464663 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464671 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.464679 4780 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5443f43-c1d5-4563-a28c-63b54fd78ee6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.559974 4780 generic.go:334] "Generic (PLEG): container finished" podID="5356607a-a085-4294-8d0a-22c641259745" containerID="a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232" exitCode=0 Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.560037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerDied","Data":"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232"} Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.560066 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqdm6" event={"ID":"5356607a-a085-4294-8d0a-22c641259745","Type":"ContainerDied","Data":"4a9722cf339ad50786faf6f0b377a107fdf0f403c9198b1e08b15f16531b57a8"} Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.560082 4780 scope.go:117] "RemoveContainer" containerID="a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.560083 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqdm6" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.561898 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d68479b85-xqbrx" event={"ID":"e5443f43-c1d5-4563-a28c-63b54fd78ee6","Type":"ContainerDied","Data":"634aecf977af38d2cf9edc0f3e6837609def41508f63a163cb376f4cdf2c0cd2"} Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.561948 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d68479b85-xqbrx" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.595540 4780 scope.go:117] "RemoveContainer" containerID="157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.612409 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.618652 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kqdm6"] Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.621782 4780 scope.go:117] "RemoveContainer" containerID="5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.625067 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.631827 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d68479b85-xqbrx"] Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.640747 4780 scope.go:117] "RemoveContainer" containerID="a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232" Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.641349 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232\": container with ID starting with a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232 not found: ID does not exist" containerID="a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.641392 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232"} err="failed to get container status \"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232\": rpc error: code = NotFound desc = could not find container \"a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232\": container with ID starting with a637118ceb41c57a0e54b0214a4be70a4dbd7f8bd07333fb1acb323003869232 not found: ID does not exist" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.641417 4780 scope.go:117] "RemoveContainer" containerID="157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a" Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.641824 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a\": container with ID starting with 157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a not found: ID does not exist" containerID="157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.641909 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a"} err="failed to get container status \"157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a\": rpc error: code = NotFound desc = could not find container \"157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a\": container with ID starting with 157ba2e860f577d933e344a026a9dbf7a4440948971e69ebbf66c5de33c1b81a not found: ID does not exist" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.641951 4780 scope.go:117] "RemoveContainer" containerID="5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629" Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.642310 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629\": container with ID starting with 5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629 not found: ID does not exist" containerID="5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.642351 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629"} err="failed to get container status \"5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629\": rpc error: code = NotFound desc = could not find container \"5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629\": container with ID starting with 5ebfae7351cdb29581fdad1d41837a7fb94fcac0d1a9a53ee4b5f69cc51b3629 not found: ID does not exist" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.642374 4780 scope.go:117] "RemoveContainer" containerID="34148739eeef05370a2f9f987ab32cbec201eca9fad402598ae56efaf7b63ca0" Dec 05 07:11:28 crc kubenswrapper[4780]: I1205 07:11:28.668273 4780 scope.go:117] "RemoveContainer" containerID="c64aac9dc2da1feacd133e1cbfed47f07ec40d71d95e0fe650627bb11646f1e3" Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.880372 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.880718 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.881151 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 
07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.881182 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.882441 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.883677 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.884979 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:28 crc kubenswrapper[4780]: E1205 07:11:28.885009 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:30 crc kubenswrapper[4780]: I1205 07:11:30.148631 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5356607a-a085-4294-8d0a-22c641259745" path="/var/lib/kubelet/pods/5356607a-a085-4294-8d0a-22c641259745/volumes" Dec 05 07:11:30 crc kubenswrapper[4780]: I1205 07:11:30.149643 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" path="/var/lib/kubelet/pods/e5443f43-c1d5-4563-a28c-63b54fd78ee6/volumes" Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.613807 4780 generic.go:334] "Generic (PLEG): container finished" podID="3782dca9-a617-47a2-9f89-96ba82200899" containerID="21310f93372293bb789a92ae777f6b29d31624531841b6a89f2146486609c159" exitCode=137 Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.614439 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"21310f93372293bb789a92ae777f6b29d31624531841b6a89f2146486609c159"} Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.806296 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.881165 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.881773 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.882075 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.882108 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.883047 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.884565 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.886039 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 05 07:11:33 crc kubenswrapper[4780]: E1205 07:11:33.886068 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-lq2sf" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:11:33 crc 
kubenswrapper[4780]: I1205 07:11:33.951684 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcjmj\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj\") pod \"3782dca9-a617-47a2-9f89-96ba82200899\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.951770 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache\") pod \"3782dca9-a617-47a2-9f89-96ba82200899\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.951795 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") pod \"3782dca9-a617-47a2-9f89-96ba82200899\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.951822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock\") pod \"3782dca9-a617-47a2-9f89-96ba82200899\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.951845 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3782dca9-a617-47a2-9f89-96ba82200899\" (UID: \"3782dca9-a617-47a2-9f89-96ba82200899\") " Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.952702 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache" (OuterVolumeSpecName: "cache") pod "3782dca9-a617-47a2-9f89-96ba82200899" (UID: "3782dca9-a617-47a2-9f89-96ba82200899"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.952869 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock" (OuterVolumeSpecName: "lock") pod "3782dca9-a617-47a2-9f89-96ba82200899" (UID: "3782dca9-a617-47a2-9f89-96ba82200899"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.960085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj" (OuterVolumeSpecName: "kube-api-access-pcjmj") pod "3782dca9-a617-47a2-9f89-96ba82200899" (UID: "3782dca9-a617-47a2-9f89-96ba82200899"). InnerVolumeSpecName "kube-api-access-pcjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.960102 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "3782dca9-a617-47a2-9f89-96ba82200899" (UID: "3782dca9-a617-47a2-9f89-96ba82200899"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 07:11:33 crc kubenswrapper[4780]: I1205 07:11:33.960092 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3782dca9-a617-47a2-9f89-96ba82200899" (UID: "3782dca9-a617-47a2-9f89-96ba82200899"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.053993 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcjmj\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-kube-api-access-pcjmj\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.054057 4780 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-cache\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.054078 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3782dca9-a617-47a2-9f89-96ba82200899-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.054094 4780 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3782dca9-a617-47a2-9f89-96ba82200899-lock\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.054132 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.078373 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.155192 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.631018 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3782dca9-a617-47a2-9f89-96ba82200899","Type":"ContainerDied","Data":"1524a52df1ae1b2390ea7df5c95ed11ba26ce692fc8c6eaf90cf7bb6bc40dd8a"} Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.631090 4780 scope.go:117] "RemoveContainer" containerID="21310f93372293bb789a92ae777f6b29d31624531841b6a89f2146486609c159" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.631181 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.667770 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:11:34 crc kubenswrapper[4780]: I1205 07:11:34.675479 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Dec 05 07:11:35 crc kubenswrapper[4780]: E1205 07:11:35.275391 4780 configmap.go:193] Couldn't get configMap openstack/ovncontroller-scripts: configmap "ovncontroller-scripts" not found Dec 05 07:11:35 crc kubenswrapper[4780]: E1205 07:11:35.275543 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts podName:52793d91-2b27-4926-9293-78f555401415 nodeName:}" failed. No retries permitted until 2025-12-05 07:12:07.275521595 +0000 UTC m=+1561.345037937 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts") pod "ovn-controller-ovs-lq2sf" (UID: "52793d91-2b27-4926-9293-78f555401415") : configmap "ovncontroller-scripts" not found Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.764477 4780 scope.go:117] "RemoveContainer" containerID="6914c82c6a21ad55bc14f021428bafdc5f53bb59cfc97dcdf06a93af43f76ed4" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.789262 4780 scope.go:117] "RemoveContainer" containerID="735e0179ac8e0b6856304c581093683f8810b9d6725ea83df77678572a5f9297" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.852172 4780 scope.go:117] "RemoveContainer" containerID="f81d963680169349f2f9fb3728fe7259c5c1a6a053a4e334e212812bad3a43e5" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.872677 4780 scope.go:117] "RemoveContainer" containerID="977b8988c9e4ea255e060358102e1022eac55a01e723c56a5c68e57ee2a94e80" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.890414 4780 scope.go:117] "RemoveContainer" containerID="960fcbfc395793c247badb8fdde1b5984a271aa278bc9c88ddf14fc26e90bea3" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.906650 4780 scope.go:117] "RemoveContainer" containerID="81ec73c2cb79b863b687984909f69f4090990583d64f1c7fd7e543c07d0c1a61" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.924763 4780 scope.go:117] "RemoveContainer" containerID="f9700fc7cbaed727d4ba60770c2a1911c7899565ee6e7549689b2e0350e69b87" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.941510 4780 scope.go:117] "RemoveContainer" containerID="9225d4a3e6c1922ea527f674b2f5bde35d6a0f6a680ffb6e130f1dab6447551d" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.960055 4780 scope.go:117] "RemoveContainer" containerID="82189ab9e1551dc0f5140613417ba953bda692d106179c487635cae012edccd5" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.980827 4780 scope.go:117] "RemoveContainer" containerID="cf24f362016fc9a9b24061e240eb546322a73d461803e71a142f2ecfcd0d8c78" Dec 05 07:11:35 crc kubenswrapper[4780]: I1205 07:11:35.999379 4780 scope.go:117] "RemoveContainer" containerID="01c18b876687d6b2d2dd2ebf76cfbc99348debda854332c91ee5bc6b8029eb69" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.019859 4780 scope.go:117] "RemoveContainer" containerID="986982b89c1a073d0278b92d9a5c06cd37c1b46a70abd7b84b4c71b882785ce2" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.040941 4780 scope.go:117] "RemoveContainer" containerID="2277d9a62f83380fb829fd9437023d5d7a6a251cbed820feb5e9eaa847bd436b" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 
07:11:36.061510 4780 scope.go:117] "RemoveContainer" containerID="197ec1cb2e42b7eee2a07e5abda174f729b39f1a30d1ee19cdf7fc349964c7dc" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.146970 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3782dca9-a617-47a2-9f89-96ba82200899" path="/var/lib/kubelet/pods/3782dca9-a617-47a2-9f89-96ba82200899/volumes" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.655905 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lq2sf_52793d91-2b27-4926-9293-78f555401415/ovs-vswitchd/0.log" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.657217 4780 generic.go:334] "Generic (PLEG): container finished" podID="52793d91-2b27-4926-9293-78f555401415" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" exitCode=137 Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.657252 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerDied","Data":"5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458"} Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.836497 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lq2sf_52793d91-2b27-4926-9293-78f555401415/ovs-vswitchd/0.log" Dec 05 07:11:36 crc kubenswrapper[4780]: I1205 07:11:36.837254 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.002914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw6v7\" (UniqueName: \"kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.002971 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003009 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003079 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003091 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib" (OuterVolumeSpecName: "var-lib") pod 
"52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003159 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log" (OuterVolumeSpecName: "var-log") pod "52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003107 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run" (OuterVolumeSpecName: "var-run") pod "52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003244 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts\") pod \"52793d91-2b27-4926-9293-78f555401415\" (UID: \"52793d91-2b27-4926-9293-78f555401415\") " Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003682 4780 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003706 4780 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-log\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003718 4780 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-etc-ovs\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.003729 4780 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/52793d91-2b27-4926-9293-78f555401415-var-lib\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.004674 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts" (OuterVolumeSpecName: "scripts") pod "52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.009852 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7" (OuterVolumeSpecName: "kube-api-access-qw6v7") pod "52793d91-2b27-4926-9293-78f555401415" (UID: "52793d91-2b27-4926-9293-78f555401415"). InnerVolumeSpecName "kube-api-access-qw6v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.104935 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw6v7\" (UniqueName: \"kubernetes.io/projected/52793d91-2b27-4926-9293-78f555401415-kube-api-access-qw6v7\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.104971 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52793d91-2b27-4926-9293-78f555401415-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.546833 4780 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod828f916b-54ac-4498-b1a7-139334944d9b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod828f916b-54ac-4498-b1a7-139334944d9b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod828f916b_54ac_4498_b1a7_139334944d9b.slice" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.555265 4780 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod65736cb4-25b2-402e-8dfe-d00b218a274b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod65736cb4-25b2-402e-8dfe-d00b218a274b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod65736cb4_25b2_402e_8dfe_d00b218a274b.slice" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.669179 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lq2sf_52793d91-2b27-4926-9293-78f555401415/ovs-vswitchd/0.log" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.670125 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lq2sf" event={"ID":"52793d91-2b27-4926-9293-78f555401415","Type":"ContainerDied","Data":"19d8f8b6375855a74dc734369454053df27d780be4f7d6aa48ec7e7253bb23d8"} Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.670180 4780 scope.go:117] "RemoveContainer" containerID="5a2fb23a1a05a8dcd9375f4e7b4ec47cd4cd3483f2fb82ed4eddd89274ebd458" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.670191 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-lq2sf" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.675034 4780 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2fb4032b-ac6a-46ea-b301-500bf63d3518"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2fb4032b-ac6a-46ea-b301-500bf63d3518] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2fb4032b_ac6a_46ea_b301_500bf63d3518.slice" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.682860 4780 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb7f1d4f8-b32f-4448-8db1-ff7299256169"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb7f1d4f8-b32f-4448-8db1-ff7299256169] : Timed out while waiting for systemd to remove kubepods-besteffort-podb7f1d4f8_b32f_4448_8db1_ff7299256169.slice" Dec 05 07:11:37 crc kubenswrapper[4780]: E1205 07:11:37.682991 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb7f1d4f8-b32f-4448-8db1-ff7299256169] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb7f1d4f8-b32f-4448-8db1-ff7299256169] : Timed out while waiting for systemd to remove kubepods-besteffort-podb7f1d4f8_b32f_4448_8db1_ff7299256169.slice" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.706004 4780 scope.go:117] "RemoveContainer" containerID="7c3e497db975b0e50d18ae29ec41d8e0c664d81c2599e76f333c3c81ae1a98dd" Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.715460 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"] Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.720213 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-lq2sf"] Dec 05 07:11:37 crc kubenswrapper[4780]: I1205 07:11:37.733694 4780 scope.go:117] "RemoveContainer" containerID="22afb8fe24ad1a2948524fdf54fc7bd5c9280a7319d822d04933116bfabbb09b" Dec 05 07:11:38 crc kubenswrapper[4780]: I1205 07:11:38.146713 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52793d91-2b27-4926-9293-78f555401415" path="/var/lib/kubelet/pods/52793d91-2b27-4926-9293-78f555401415/volumes" Dec 05 07:11:38 crc kubenswrapper[4780]: I1205 07:11:38.678957 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9fbbf6f7-sgn25" Dec 05 07:11:38 crc kubenswrapper[4780]: I1205 07:11:38.721261 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"] Dec 05 07:11:38 crc kubenswrapper[4780]: I1205 07:11:38.728617 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9fbbf6f7-sgn25"] Dec 05 07:11:40 crc kubenswrapper[4780]: I1205 07:11:40.150655 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" path="/var/lib/kubelet/pods/b7f1d4f8-b32f-4448-8db1-ff7299256169/volumes" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.311280 4780 scope.go:117] "RemoveContainer" containerID="530ca0e5e1babec81694e9c8b93e6ef4428014a40ae28b2534d753e93c90ee0d" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.333203 4780 scope.go:117] "RemoveContainer" containerID="0628596ee4bfc90e900abe0430b5bf28210f42e1675a435b855a7d6fae706283" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.352076 4780 scope.go:117] "RemoveContainer" containerID="f5a19d596fbaf2b4a17da0bcb1a9fbc8f4f9e6d4fb4526f80df2d3280f9de5d0" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.380752 4780 scope.go:117] "RemoveContainer" containerID="ccb2a5c95c26393b4413ed29ebb1eb9b8c87958c1e4deadeac536f976bfdca9e" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.406422 4780 scope.go:117] "RemoveContainer" containerID="9654c7269b622680dcb56608c38fc0a232f404664727ed94cb9dd7668100f74a" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.427439 4780 scope.go:117] "RemoveContainer" containerID="c0c14a6a851b055628b01b03335bcd98a85ff1861eeb6688a554462156bea99f" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.445978 4780 scope.go:117] "RemoveContainer" containerID="edf3bc8a8d63ed0f663b4a238ab5c8946207e1f70506209b7613ac8e39b9757a" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.471925 4780 scope.go:117] "RemoveContainer" containerID="5450b625e2fd6628a65ff330106c052fa609d51529eba7c91d50eb2a2c2bfed0" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.507827 4780 scope.go:117] "RemoveContainer" containerID="6c8199cb067935af9fd1559a6a1928e847d6217a972643137a70db93d03e5e5e" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.529354 4780 scope.go:117] "RemoveContainer" containerID="70eba769d8771a08270550af12e817a94cd799e9a0ebc5ed8ba18bfd52eca294" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.550008 4780 scope.go:117] "RemoveContainer" containerID="2588118c8f724767c81041c6d0223d7bf9ca8adeb4a04b8f033b9b3ce59c1085" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.566500 4780 scope.go:117] "RemoveContainer" containerID="d19ad9ccad8be34e9adcc6e46df9307e0138b783fd220f548d39c05464deaafc" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.584785 4780 scope.go:117] "RemoveContainer" containerID="ba48e227be37eb4042698d7ca1593b87f7775504552ded5d9b72f3ab116ccd77" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.604071 4780 scope.go:117] "RemoveContainer" containerID="c9a9b3910aba9f6425456a7bf0367e487b88680f7d5707056f6246aa6385429a" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.625626 4780 scope.go:117] "RemoveContainer" containerID="a8e973ca6d8a4a47061fb448fffbaf4efe94cfce4348cf8c675d7acfd4e3fd45" Dec 05 07:12:21 crc kubenswrapper[4780]: I1205 07:12:21.652646 4780 scope.go:117] "RemoveContainer" containerID="316425e4a0ec5cf2c114631bb0b139a58858607d1ad2c0a8665ea65f6d08f90f" Dec 05 07:12:21 crc kubenswrapper[4780]: 
I1205 07:12:21.673688 4780 scope.go:117] "RemoveContainer" containerID="3742f6a072d8632b7e8ed892a5b7bcdb214d48d59cb741bc0ca3f06d7123512e" Dec 05 07:12:59 crc kubenswrapper[4780]: I1205 07:12:59.908418 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:12:59 crc kubenswrapper[4780]: I1205 07:12:59.909002 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:13:21 crc kubenswrapper[4780]: I1205 07:13:21.927632 4780 scope.go:117] "RemoveContainer" containerID="86e03a93a4034922f11da685b33b905b9b9adf31707d92c9b1446d56127e1bb0" Dec 05 07:13:21 crc kubenswrapper[4780]: I1205 07:13:21.979734 4780 scope.go:117] "RemoveContainer" containerID="feabab19310c27909296902614d550ac38609227b665b0437e180098bac35617" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.014804 4780 scope.go:117] "RemoveContainer" containerID="6b826cf4b10deb459d0f861eec1f359220f15430f6daae85fcd2dacae45fa23b" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.037423 4780 scope.go:117] "RemoveContainer" containerID="748cfaf03a9adcb58c631cc8c43a8c70f9648ada77462ecef8c4354ad0cb4038" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.087344 4780 scope.go:117] "RemoveContainer" containerID="f30601955e94c94d570bb7c9b2b21396a74650848a8837a8160a47406b522208" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.108310 4780 scope.go:117] "RemoveContainer" containerID="db7f8ad925a6d723dd40ba017c82e5d901b4fc6f29ab6babf1ed5e72d4d8a760" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.140072 4780 scope.go:117] "RemoveContainer" containerID="484669aad9618352bb527faab8116cfefb67c707e8df7bbf51c11d558fc34090" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.158532 4780 scope.go:117] "RemoveContainer" containerID="e110e2d54c09d0e88283f90c73063aeb81b41224ace45de198d5032e23abeb2e" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.185679 4780 scope.go:117] "RemoveContainer" containerID="cad38a88de1d1c3785a9becf732629324425632ba07e9746939e332ec95a2266" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.219319 4780 scope.go:117] "RemoveContainer" containerID="994f56fa937e545b930e9da333108023bc7033e8db6e0fff54778e83bdef084c" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.250151 4780 scope.go:117] "RemoveContainer" containerID="86cae831a23053daa2cbd1948458e97fdbd3340f44657f9ce62799dfac4d38b1" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.265988 4780 scope.go:117] "RemoveContainer" containerID="6e7ced7781b6a5e33d2240103aeec9a5ca2b777888c9cdb8cf8f640a4e31d624" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.283111 4780 scope.go:117] "RemoveContainer" containerID="8ef8574b8ff939edd05fbd221ac45e6aba76f762b7fac2adf4514b0ac08a5360" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.303025 4780 scope.go:117] "RemoveContainer" containerID="17c12e148aee39d277dd21f751e37c01d7142872900b50c7990ad1ae85ded518" Dec 05 07:13:22 crc kubenswrapper[4780]: I1205 07:13:22.344815 4780 scope.go:117] "RemoveContainer" 
containerID="c430401df565c59426a009b5d9663962ceeacfa3becd8357efdae4e100ab7a21" Dec 05 07:13:29 crc kubenswrapper[4780]: I1205 07:13:29.908072 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:13:29 crc kubenswrapper[4780]: I1205 07:13:29.909299 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:13:59 crc kubenswrapper[4780]: I1205 07:13:59.908091 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:13:59 crc kubenswrapper[4780]: I1205 07:13:59.908664 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:13:59 crc kubenswrapper[4780]: I1205 07:13:59.908717 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:13:59 crc kubenswrapper[4780]: I1205 07:13:59.909397 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:13:59 crc kubenswrapper[4780]: I1205 07:13:59.909452 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" gracePeriod=600 Dec 05 07:14:00 crc kubenswrapper[4780]: E1205 07:14:00.034254 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:14:00 crc kubenswrapper[4780]: I1205 07:14:00.908320 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" exitCode=0 Dec 05 07:14:00 crc kubenswrapper[4780]: I1205 07:14:00.908381 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a"} Dec 05 07:14:00 crc kubenswrapper[4780]: I1205 07:14:00.908429 4780 scope.go:117] "RemoveContainer" containerID="d9dc2d92a1d6ba1ee75bf54b5eb7456372ba33add1817df5a2c1354bbca5e757" Dec 05 07:14:00 crc kubenswrapper[4780]: I1205 07:14:00.908906 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:14:00 crc kubenswrapper[4780]: E1205 07:14:00.909115 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:14:15 crc kubenswrapper[4780]: I1205 07:14:15.139154 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:14:15 crc kubenswrapper[4780]: E1205 07:14:15.140389 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.583697 4780 scope.go:117] "RemoveContainer" containerID="735e992792dfcfc2d0ddf77ecf8c63baaf98681ce5ab2258cb6dfd4d6bebaf8e" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.605293 4780 scope.go:117] "RemoveContainer" containerID="d200411146961a40d5c61aaae124b891efa425fbc8abe655520e6b1f080b9824" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.627910 4780 scope.go:117] "RemoveContainer" containerID="4e55d512af9e461881a0ca014139b9f0bc12396a8a4f07b2d40fc4f1f7ca7d8d" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.658758 4780 scope.go:117] "RemoveContainer" containerID="f85ea07834ee61f621436527d18fcaee12cce7479f24d2ad60f921417162105f" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.685763 4780 scope.go:117] "RemoveContainer" containerID="a2dee3018e38265f1fab81663bc435e93805201a6800848b3cb8d8282d2f7c3a" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.710065 4780 scope.go:117] "RemoveContainer" containerID="8fffeffc9dd667b68b1d3d2ab1c350a63ba46bb0fae6eb9e60e37596aec7de8e" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.726155 4780 scope.go:117] "RemoveContainer" containerID="0eb1f9f781814534359ecc748e52c6e4547659a97d7852a9bc35e6e85c9c72d4" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.752480 4780 scope.go:117] "RemoveContainer" containerID="1149f20bd04bc2a2bf513a262f145e4eb15d251702407c87dc7d603d89e3e28d" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.790246 4780 scope.go:117] "RemoveContainer" containerID="54efef79e6df78f9c7a79be7c0902ee44a3970e79099cd25bf9047386200ff4c" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.809752 4780 scope.go:117] "RemoveContainer" containerID="b7d4a3dac21d90122fe88d5308d7939f24f6b2475dc30bbbf81f06bd4930e1a3" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.827043 4780 scope.go:117] 
"RemoveContainer" containerID="d247be1b147a98f7d05a4bb3c8635747189f02eca874ffceb138264c83747cc4" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.851613 4780 scope.go:117] "RemoveContainer" containerID="ad68d542037ed1614224516474d2e4ef6e33875d9bce1dcf46e5f3cb65e27b0a" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.871089 4780 scope.go:117] "RemoveContainer" containerID="bbf7ba30828f7305d2c91dd07104e5ee99cdcba79c89362c856ebc2c639710e1" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.890751 4780 scope.go:117] "RemoveContainer" containerID="fef6e722fddda372925f1a55f421f2982a68362d125bb6bf864ed9052243417f" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.909800 4780 scope.go:117] "RemoveContainer" containerID="2d699037508ccbdd7d61092346751742b0b0f36d49068e0aec3a8c41238695cd" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.924895 4780 scope.go:117] "RemoveContainer" containerID="fa0a6343d445a98183bd0e28c4205f4ee3dbabc1af80c9794439de122f2d4f70" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.939918 4780 scope.go:117] "RemoveContainer" containerID="1f72197d67bb45b009e4fc63d14efd6e5634ae9d06e9c8d83b9c4a8b9a6be45a" Dec 05 07:14:22 crc kubenswrapper[4780]: I1205 07:14:22.955321 4780 scope.go:117] "RemoveContainer" containerID="21cb52d533dbe56f4988844a69a64aaf8e041956d1ff9074d70672e4e95db8ee" Dec 05 07:14:30 crc kubenswrapper[4780]: I1205 07:14:30.138512 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:14:30 crc kubenswrapper[4780]: E1205 07:14:30.139313 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:14:41 crc kubenswrapper[4780]: I1205 07:14:41.138834 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:14:41 crc kubenswrapper[4780]: E1205 07:14:41.139560 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:14:52 crc kubenswrapper[4780]: I1205 07:14:52.138722 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:14:52 crc kubenswrapper[4780]: E1205 07:14:52.139505 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.165770 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g"] Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166553 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166577 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166614 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166622 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166633 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166640 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166654 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166660 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166668 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166673 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166683 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166691 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166700 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166707 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166720 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166730 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166739 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-central-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166747 4780 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-central-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166765 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166772 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166785 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166791 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166803 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166808 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166818 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166823 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-server" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166833 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166838 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166848 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerName="nova-cell1-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166857 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerName="nova-cell1-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166867 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166873 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166901 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166908 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-server" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166922 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="setup-container" Dec 05 
07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166927 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="setup-container" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166935 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="rsync" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166942 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="rsync" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166952 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166958 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-server" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166969 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="probe" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166975 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="probe" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.166987 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.166993 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167002 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-reaper" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167007 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-reaper" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167014 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167019 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167025 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167032 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167039 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167044 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167057 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 
07:15:00.167065 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-server" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167081 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="mysql-bootstrap" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167089 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="mysql-bootstrap" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167099 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167106 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167112 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167118 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167124 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="cinder-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167133 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="cinder-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167141 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167149 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167162 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb4032b-ac6a-46ea-b301-500bf63d3518" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167167 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb4032b-ac6a-46ea-b301-500bf63d3518" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167177 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167182 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167196 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167201 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167211 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" 
containerName="setup-container" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167217 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="setup-container" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167225 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167230 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167238 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167246 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167256 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828f916b-54ac-4498-b1a7-139334944d9b" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167261 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="828f916b-54ac-4498-b1a7-139334944d9b" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167272 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167277 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167284 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-expirer" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167289 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-expirer" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167298 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="ovsdbserver-sb" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167303 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="ovsdbserver-sb" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167310 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-notification-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167316 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-notification-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167323 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167331 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167337 
4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167342 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167355 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="mysql-bootstrap" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167366 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="mysql-bootstrap" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167380 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="swift-recon-cron" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167387 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="swift-recon-cron" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167399 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="ovsdbserver-nb" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167406 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="ovsdbserver-nb" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167419 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167427 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167437 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167443 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167452 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167459 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167472 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167478 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167484 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" containerName="memcached" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167490 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" containerName="memcached" Dec 05 07:15:00 crc 
kubenswrapper[4780]: E1205 07:15:00.167500 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167505 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167513 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167520 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167531 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167537 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167544 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167549 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167558 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="init" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167564 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="init" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167576 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167595 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167602 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167614 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167621 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167629 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167637 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" 
containerName="account-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167644 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="dnsmasq-dns" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167651 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="dnsmasq-dns" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167659 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167666 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167673 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server-init" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167679 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server-init" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167686 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" containerName="kube-state-metrics" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167694 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" containerName="kube-state-metrics" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167704 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167709 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167717 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167723 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167730 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="extract-utilities" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167735 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="extract-utilities" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167742 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167751 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167757 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" containerName="keystone-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167775 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" containerName="keystone-api" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167786 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167793 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167799 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167805 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167813 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167821 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167828 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167834 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167843 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="extract-content" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167849 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="extract-content" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167857 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167864 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167870 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="sg-core" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167876 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="sg-core" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167910 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167919 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167927 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167936 4780 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167950 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167957 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker-log" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167966 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerName="nova-scheduler-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167973 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerName="nova-scheduler-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167984 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c269c975-543e-44e0-ac7a-abf3f7a619dd" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.167990 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c269c975-543e-44e0-ac7a-abf3f7a619dd" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: E1205 07:15:00.167997 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="registry-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168003 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="registry-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168203 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168222 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="828f916b-54ac-4498-b1a7-139334944d9b" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168232 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168241 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168248 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168254 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168262 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168272 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168280 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168289 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-central-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168304 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-expirer" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168322 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="72765495-c470-41a5-b5a7-423025bdd6a7" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168337 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6efd4f-660c-44e1-bf69-8b1cec6a6e85" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168349 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb4032b-ac6a-46ea-b301-500bf63d3518" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168359 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168371 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovs-vswitchd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168381 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ebc417-5adb-4ac6-9b5c-6f065fc4afe0" containerName="ovn-controller" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168390 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168401 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="rsync" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168410 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="ovsdbserver-sb" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168417 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168424 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="33af7252-1228-4051-bab0-cfcaee04fe1d" containerName="nova-scheduler-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168432 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168439 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168445 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168454 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8b72a-b08b-4c2f-98dc-242016b6f846" containerName="nova-api-api" Dec 05 07:15:00 crc 
kubenswrapper[4780]: I1205 07:15:00.168465 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-reaper" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168479 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5032d09-8298-4941-8b4b-0f24a57b8ced" containerName="rabbitmq" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168493 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168507 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="sg-core" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168516 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8bb2be-991d-4cb3-b3b9-9175c78019d9" containerName="keystone-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168528 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168535 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168547 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168560 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c681b8-252b-4d1a-8293-27528bc83ed8" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168569 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168579 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168587 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168595 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe98bcd-7b01-4246-9879-15ed51cf7a1f" containerName="kube-state-metrics" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168648 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168658 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffce971d-fa60-450d-a347-29ba2a9c9c84" containerName="ovn-northd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168670 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf87b821-f0c0-41df-a1ee-f2c44a09cc82" containerName="cinder-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168682 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="account-auditor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168691 4780 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7c32a219-7b72-4302-8cc4-b9f11a672e8d" containerName="memcached" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168700 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="574be54a-bbce-4f37-93b1-c9de6f1d0f4e" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168712 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c269c975-543e-44e0-ac7a-abf3f7a619dd" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168721 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9c218c-8cf4-468d-a946-bb14fc0024b0" containerName="barbican-worker-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168728 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="probe" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168741 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168750 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168759 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd70346-51cf-44fc-8cea-48ee35deadb0" containerName="proxy-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168767 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168773 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="885ecc9e-e70a-4d6e-ab6b-f82e46be61a3" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168780 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="52793d91-2b27-4926-9293-78f555401415" containerName="ovsdb-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168787 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168795 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168805 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="object-replicator" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168814 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f97591-4528-4ed0-918c-b6de191c452a" containerName="nova-cell0-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168824 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9395104-b579-44d5-bbf0-69fe4d17406d" containerName="cinder-scheduler" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168832 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f1d4f8-b32f-4448-8db1-ff7299256169" containerName="dnsmasq-dns" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168841 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" 
containerName="swift-recon-cron" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168849 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168859 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b8df94-a979-4c1a-bffd-5f5052f0ad12" containerName="barbican-api" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168867 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a294e09-ff41-4fcc-81f4-2a674c77c239" containerName="glance-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168875 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168917 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5356607a-a085-4294-8d0a-22c641259745" containerName="registry-server" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168927 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168936 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa86c0d1-d6cb-4566-b4b3-352c690b0a96" containerName="barbican-keystone-listener-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168945 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="621ea4dd-7bc5-4404-9369-1cd99335155d" containerName="galera" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168954 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aca675e-bb76-4588-b998-c26393dd5ab6" containerName="ceilometer-notification-agent" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168964 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dca9-a617-47a2-9f89-96ba82200899" containerName="container-updater" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168971 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c381b4ec-8b36-4a3d-8e07-dbbc3a021f11" containerName="nova-metadata-metadata" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168984 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5443f43-c1d5-4563-a28c-63b54fd78ee6" containerName="neutron-httpd" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.168993 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee336d1-2c89-4ccb-b6ea-69a4697b7a29" containerName="openstack-network-exporter" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.169000 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda6b602-0a2c-4047-94ba-f8cdf4bbcf0c" containerName="nova-cell1-conductor-conductor" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.169010 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa3ab37e-e167-44dd-985c-c8f6b067cfdd" containerName="ovsdbserver-nb" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.169019 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f29c4f-0842-42e5-9a7d-d1c24a8f75c7" containerName="placement-log" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.169660 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.172817 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g"] Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.176988 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.177375 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.362535 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.362701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.362727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5pz\" (UniqueName: \"kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.463994 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.464093 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.464124 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc5pz\" (UniqueName: \"kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.465438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume\") pod 
\"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.472743 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.480277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc5pz\" (UniqueName: \"kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz\") pod \"collect-profiles-29415315-m4v7g\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.496915 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:00 crc kubenswrapper[4780]: I1205 07:15:00.946752 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g"] Dec 05 07:15:01 crc kubenswrapper[4780]: I1205 07:15:01.389205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" event={"ID":"eb2c6359-8e13-4d63-a8f8-15a24b9a3141","Type":"ContainerStarted","Data":"e7ebb91c55625c71b7cb1095927aed82ac910dc8efd5a7c07eaaa9faaf373d72"} Dec 05 07:15:01 crc kubenswrapper[4780]: I1205 07:15:01.389277 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" event={"ID":"eb2c6359-8e13-4d63-a8f8-15a24b9a3141","Type":"ContainerStarted","Data":"96b5d02f343694cfde97fcfb162b5123bdb589c4d0a28e6454618b3eac06936a"} Dec 05 07:15:01 crc kubenswrapper[4780]: I1205 07:15:01.412574 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" podStartSLOduration=1.412552281 podStartE2EDuration="1.412552281s" podCreationTimestamp="2025-12-05 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 07:15:01.401986017 +0000 UTC m=+1735.471502359" watchObservedRunningTime="2025-12-05 07:15:01.412552281 +0000 UTC m=+1735.482068613" Dec 05 07:15:02 crc kubenswrapper[4780]: I1205 07:15:02.404212 4780 generic.go:334] "Generic (PLEG): container finished" podID="eb2c6359-8e13-4d63-a8f8-15a24b9a3141" containerID="e7ebb91c55625c71b7cb1095927aed82ac910dc8efd5a7c07eaaa9faaf373d72" exitCode=0 Dec 05 07:15:02 crc kubenswrapper[4780]: I1205 07:15:02.404351 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" event={"ID":"eb2c6359-8e13-4d63-a8f8-15a24b9a3141","Type":"ContainerDied","Data":"e7ebb91c55625c71b7cb1095927aed82ac910dc8efd5a7c07eaaa9faaf373d72"} Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.139637 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:15:03 crc kubenswrapper[4780]: E1205 07:15:03.139920 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.659341 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.813485 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume\") pod \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.813634 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume\") pod \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.813662 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc5pz\" (UniqueName: \"kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz\") pod \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\" (UID: \"eb2c6359-8e13-4d63-a8f8-15a24b9a3141\") " Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.814415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb2c6359-8e13-4d63-a8f8-15a24b9a3141" (UID: "eb2c6359-8e13-4d63-a8f8-15a24b9a3141"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.814742 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.819240 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb2c6359-8e13-4d63-a8f8-15a24b9a3141" (UID: "eb2c6359-8e13-4d63-a8f8-15a24b9a3141"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.819613 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz" (OuterVolumeSpecName: "kube-api-access-dc5pz") pod "eb2c6359-8e13-4d63-a8f8-15a24b9a3141" (UID: "eb2c6359-8e13-4d63-a8f8-15a24b9a3141"). InnerVolumeSpecName "kube-api-access-dc5pz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.916409 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:15:03 crc kubenswrapper[4780]: I1205 07:15:03.916487 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc5pz\" (UniqueName: \"kubernetes.io/projected/eb2c6359-8e13-4d63-a8f8-15a24b9a3141-kube-api-access-dc5pz\") on node \"crc\" DevicePath \"\"" Dec 05 07:15:04 crc kubenswrapper[4780]: I1205 07:15:04.424903 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" event={"ID":"eb2c6359-8e13-4d63-a8f8-15a24b9a3141","Type":"ContainerDied","Data":"96b5d02f343694cfde97fcfb162b5123bdb589c4d0a28e6454618b3eac06936a"} Dec 05 07:15:04 crc kubenswrapper[4780]: I1205 07:15:04.424955 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b5d02f343694cfde97fcfb162b5123bdb589c4d0a28e6454618b3eac06936a" Dec 05 07:15:04 crc kubenswrapper[4780]: I1205 07:15:04.424952 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g" Dec 05 07:15:18 crc kubenswrapper[4780]: I1205 07:15:18.138394 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:15:18 crc kubenswrapper[4780]: E1205 07:15:18.139286 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:15:23 crc kubenswrapper[4780]: I1205 07:15:23.116569 4780 scope.go:117] "RemoveContainer" containerID="d5c8d91a402c1d07e080bbe797caccca3caf869137330315419ea7289bc4522d" Dec 05 07:15:23 crc kubenswrapper[4780]: I1205 07:15:23.140318 4780 scope.go:117] "RemoveContainer" containerID="33698a038aab8dc0dfa876c85ed821b198c81dd1835f1e33bee13fd9c143b083" Dec 05 07:15:23 crc kubenswrapper[4780]: I1205 07:15:23.186353 4780 scope.go:117] "RemoveContainer" containerID="28ccece98c496d594b51ff9c26f60e3f17e74c62271711c3b2b7f2111e13a950" Dec 05 07:15:23 crc kubenswrapper[4780]: I1205 07:15:23.226867 4780 scope.go:117] "RemoveContainer" containerID="d3fff3c70f6ba272225dc97dee1285229035520e67907fdfb1836cedaf1dfa1a" Dec 05 07:15:23 crc kubenswrapper[4780]: I1205 07:15:23.255752 4780 scope.go:117] "RemoveContainer" containerID="3434bde80ccbbb622b689d6302932b3de48696d0bfb3f6dd895bbc7bdccbf874" Dec 05 07:15:33 crc kubenswrapper[4780]: I1205 07:15:33.139115 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:15:33 crc kubenswrapper[4780]: E1205 07:15:33.139864 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:15:45 crc kubenswrapper[4780]: I1205 07:15:45.138649 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:15:45 crc kubenswrapper[4780]: E1205 07:15:45.139445 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:16:00 crc kubenswrapper[4780]: I1205 07:16:00.138386 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:16:00 crc kubenswrapper[4780]: E1205 07:16:00.145564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:16:14 crc kubenswrapper[4780]: I1205 07:16:14.140105 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:16:14 crc kubenswrapper[4780]: E1205 07:16:14.140787 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.381618 4780 scope.go:117] "RemoveContainer" containerID="282e85d0f2ea9e2278d2658c562e6fa9b7d5cb1b13122f0ecb2e5ba8c5f54666" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.407963 4780 scope.go:117] "RemoveContainer" containerID="90c21b5b82ee7130df8d8bcf05d3b02beac2d30b78c61ce4d5952c9cdff4a483" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.456526 4780 scope.go:117] "RemoveContainer" containerID="6729e14dde9f78be13ae40bdda9e3ae569261b8bcd8c18d065c49f17af80f082" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.474731 4780 scope.go:117] "RemoveContainer" containerID="a20d6cbd57ed2168f88f8759eb95d421a1f35187316266826d960b469948f1fa" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.489742 4780 scope.go:117] "RemoveContainer" containerID="bf8403cc2bb192e0c688d9786ea4076ec9cea534cc6c97016719aeffa054e392" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.507482 4780 scope.go:117] "RemoveContainer" containerID="7b6ccff69e702c06122f20efccc590a8f63a94c28204c42d45a3128606dcedcb" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.523226 4780 scope.go:117] "RemoveContainer" containerID="3793732da62f19e7a1e8b9f03d99576883cb03ca232244237185b399ee3c2f70" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.537843 4780 scope.go:117] "RemoveContainer" 
containerID="16c73fa0f2a5f44cf47ed2c1b9a24fbd61232e5c1ea4917a71340ce36c55db3d" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.552470 4780 scope.go:117] "RemoveContainer" containerID="b9f87c3eba2fae6410181f1cfccefbdd7a818f8f594f45c68f1af0f1deb78353" Dec 05 07:16:23 crc kubenswrapper[4780]: I1205 07:16:23.565801 4780 scope.go:117] "RemoveContainer" containerID="a959f6bc66c2db1c1600ed04dc5d26591b5e87880b38d5a29268b847e514d376" Dec 05 07:16:26 crc kubenswrapper[4780]: I1205 07:16:26.142310 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:16:26 crc kubenswrapper[4780]: E1205 07:16:26.142752 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:16:38 crc kubenswrapper[4780]: I1205 07:16:38.138623 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:16:38 crc kubenswrapper[4780]: E1205 07:16:38.139468 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:16:50 crc kubenswrapper[4780]: I1205 07:16:50.138899 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:16:50 crc kubenswrapper[4780]: E1205 07:16:50.139625 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:05 crc kubenswrapper[4780]: I1205 07:17:05.138600 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:17:05 crc kubenswrapper[4780]: E1205 07:17:05.139434 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:16 crc kubenswrapper[4780]: I1205 07:17:16.142980 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:17:16 crc kubenswrapper[4780]: E1205 07:17:16.145290 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:30 crc kubenswrapper[4780]: I1205 07:17:30.138931 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:17:30 crc kubenswrapper[4780]: E1205 07:17:30.139644 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:42 crc kubenswrapper[4780]: I1205 07:17:42.138261 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:17:42 crc kubenswrapper[4780]: E1205 07:17:42.138974 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.746069 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:17:52 crc kubenswrapper[4780]: E1205 07:17:52.746986 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747000 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c542de0-85ab-43f2-89ca-fb8a6c19e49d" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: E1205 07:17:52.747014 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747022 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="202ef989-0cbf-4120-8621-11201cfe3d64" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: E1205 07:17:52.747032 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2c6359-8e13-4d63-a8f8-15a24b9a3141" containerName="collect-profiles" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747039 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2c6359-8e13-4d63-a8f8-15a24b9a3141" containerName="collect-profiles" Dec 05 07:17:52 crc kubenswrapper[4780]: E1205 07:17:52.747058 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747064 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747208 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6b6e1d3b-503e-49c8-8d33-bcaae571525c" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747228 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="52234708-ef2b-40c7-af1b-61e1890dd674" containerName="mariadb-account-delete" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.747242 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2c6359-8e13-4d63-a8f8-15a24b9a3141" containerName="collect-profiles" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.748307 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.753648 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.802315 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.802397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.802449 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9526z\" (UniqueName: \"kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.903782 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9526z\" (UniqueName: \"kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.903893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.903943 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.904597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities\") pod \"redhat-operators-75ddw\" (UID: 
\"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.904617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:52 crc kubenswrapper[4780]: I1205 07:17:52.923419 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9526z\" (UniqueName: \"kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z\") pod \"redhat-operators-75ddw\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:53 crc kubenswrapper[4780]: I1205 07:17:53.068415 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:17:53 crc kubenswrapper[4780]: I1205 07:17:53.513577 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:17:54 crc kubenswrapper[4780]: I1205 07:17:54.196775 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerID="2d1612e9bc63820a79c7b75d24d2ae003be1f24eec28b3b55ea42d700f1dec97" exitCode=0 Dec 05 07:17:54 crc kubenswrapper[4780]: I1205 07:17:54.196838 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerDied","Data":"2d1612e9bc63820a79c7b75d24d2ae003be1f24eec28b3b55ea42d700f1dec97"} Dec 05 07:17:54 crc kubenswrapper[4780]: I1205 07:17:54.197146 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerStarted","Data":"d40ccff423f803a032c1721de0f0968ebfb63eaf8c4db2077c363961950bb14e"} Dec 05 07:17:54 crc kubenswrapper[4780]: I1205 07:17:54.198549 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:17:55 crc kubenswrapper[4780]: I1205 07:17:55.138483 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:17:55 crc kubenswrapper[4780]: E1205 07:17:55.139038 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:17:55 crc kubenswrapper[4780]: I1205 07:17:55.205972 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerStarted","Data":"f693b90404f756e759beb8f8494b66d36a37c42631b11aec285462bf9dfa3ecb"} Dec 05 07:17:56 crc kubenswrapper[4780]: I1205 07:17:56.216996 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerID="f693b90404f756e759beb8f8494b66d36a37c42631b11aec285462bf9dfa3ecb" exitCode=0 Dec 05 
07:17:56 crc kubenswrapper[4780]: I1205 07:17:56.217053 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerDied","Data":"f693b90404f756e759beb8f8494b66d36a37c42631b11aec285462bf9dfa3ecb"} Dec 05 07:17:57 crc kubenswrapper[4780]: I1205 07:17:57.226253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerStarted","Data":"ed08d36bf54642e8a425c530a61de5e79ba4ceda1f2cf644e65f3c03f0e0304d"} Dec 05 07:17:57 crc kubenswrapper[4780]: I1205 07:17:57.243968 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75ddw" podStartSLOduration=2.783840152 podStartE2EDuration="5.243951359s" podCreationTimestamp="2025-12-05 07:17:52 +0000 UTC" firstStartedPulling="2025-12-05 07:17:54.198282339 +0000 UTC m=+1908.267798671" lastFinishedPulling="2025-12-05 07:17:56.658393546 +0000 UTC m=+1910.727909878" observedRunningTime="2025-12-05 07:17:57.241095072 +0000 UTC m=+1911.310611414" watchObservedRunningTime="2025-12-05 07:17:57.243951359 +0000 UTC m=+1911.313467691" Dec 05 07:18:03 crc kubenswrapper[4780]: I1205 07:18:03.068674 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:03 crc kubenswrapper[4780]: I1205 07:18:03.069453 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:03 crc kubenswrapper[4780]: I1205 07:18:03.115007 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:03 crc kubenswrapper[4780]: I1205 07:18:03.306394 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:03 crc kubenswrapper[4780]: I1205 07:18:03.357819 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:18:05 crc kubenswrapper[4780]: I1205 07:18:05.294668 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-75ddw" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="registry-server" containerID="cri-o://ed08d36bf54642e8a425c530a61de5e79ba4ceda1f2cf644e65f3c03f0e0304d" gracePeriod=2 Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.139124 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:18:08 crc kubenswrapper[4780]: E1205 07:18:08.139905 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.316658 4780 generic.go:334] "Generic (PLEG): container finished" podID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerID="ed08d36bf54642e8a425c530a61de5e79ba4ceda1f2cf644e65f3c03f0e0304d" exitCode=0 Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.316710 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerDied","Data":"ed08d36bf54642e8a425c530a61de5e79ba4ceda1f2cf644e65f3c03f0e0304d"} Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.316741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75ddw" event={"ID":"bfc88733-3cb7-43ec-afbf-dec99c4a968b","Type":"ContainerDied","Data":"d40ccff423f803a032c1721de0f0968ebfb63eaf8c4db2077c363961950bb14e"} Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.316755 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40ccff423f803a032c1721de0f0968ebfb63eaf8c4db2077c363961950bb14e" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.341948 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.516197 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities\") pod \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.516275 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content\") pod \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.516301 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9526z\" (UniqueName: \"kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z\") pod \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\" (UID: \"bfc88733-3cb7-43ec-afbf-dec99c4a968b\") " Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.517191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities" (OuterVolumeSpecName: "utilities") pod "bfc88733-3cb7-43ec-afbf-dec99c4a968b" (UID: "bfc88733-3cb7-43ec-afbf-dec99c4a968b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.523345 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z" (OuterVolumeSpecName: "kube-api-access-9526z") pod "bfc88733-3cb7-43ec-afbf-dec99c4a968b" (UID: "bfc88733-3cb7-43ec-afbf-dec99c4a968b"). InnerVolumeSpecName "kube-api-access-9526z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.618252 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9526z\" (UniqueName: \"kubernetes.io/projected/bfc88733-3cb7-43ec-afbf-dec99c4a968b-kube-api-access-9526z\") on node \"crc\" DevicePath \"\"" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.618292 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.645186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfc88733-3cb7-43ec-afbf-dec99c4a968b" (UID: "bfc88733-3cb7-43ec-afbf-dec99c4a968b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:18:08 crc kubenswrapper[4780]: I1205 07:18:08.719589 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc88733-3cb7-43ec-afbf-dec99c4a968b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:18:09 crc kubenswrapper[4780]: I1205 07:18:09.324194 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75ddw" Dec 05 07:18:09 crc kubenswrapper[4780]: I1205 07:18:09.362456 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:18:09 crc kubenswrapper[4780]: I1205 07:18:09.367803 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-75ddw"] Dec 05 07:18:10 crc kubenswrapper[4780]: I1205 07:18:10.147308 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" path="/var/lib/kubelet/pods/bfc88733-3cb7-43ec-afbf-dec99c4a968b/volumes" Dec 05 07:18:22 crc kubenswrapper[4780]: I1205 07:18:22.139706 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:18:22 crc kubenswrapper[4780]: E1205 07:18:22.141219 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:18:34 crc kubenswrapper[4780]: I1205 07:18:34.138840 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:18:34 crc kubenswrapper[4780]: E1205 07:18:34.139564 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:18:45 crc kubenswrapper[4780]: I1205 07:18:45.138944 4780 scope.go:117] "RemoveContainer" 
containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:18:45 crc kubenswrapper[4780]: E1205 07:18:45.139682 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:18:57 crc kubenswrapper[4780]: I1205 07:18:57.139451 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:18:57 crc kubenswrapper[4780]: E1205 07:18:57.141778 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:19:11 crc kubenswrapper[4780]: I1205 07:19:11.138826 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:19:11 crc kubenswrapper[4780]: I1205 07:19:11.826328 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded"} Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.239986 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:19:46 crc kubenswrapper[4780]: E1205 07:19:46.241309 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="registry-server" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.241326 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="registry-server" Dec 05 07:19:46 crc kubenswrapper[4780]: E1205 07:19:46.241361 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="extract-content" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.241368 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="extract-content" Dec 05 07:19:46 crc kubenswrapper[4780]: E1205 07:19:46.241377 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="extract-utilities" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.241387 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="extract-utilities" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.241772 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc88733-3cb7-43ec-afbf-dec99c4a968b" containerName="registry-server" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.253580 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.290369 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.323087 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.323622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh2bv\" (UniqueName: \"kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.323687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.424552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.424636 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh2bv\" (UniqueName: \"kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.424673 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.425583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.425668 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.447430 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nh2bv\" (UniqueName: \"kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv\") pod \"certified-operators-whgld\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:46 crc kubenswrapper[4780]: I1205 07:19:46.583802 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:47 crc kubenswrapper[4780]: I1205 07:19:47.071382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:19:47 crc kubenswrapper[4780]: I1205 07:19:47.111687 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerStarted","Data":"f3810bcfc1e2fe167de549b1985f1bfa1b894c860d2e954603f22f60434280f6"} Dec 05 07:19:48 crc kubenswrapper[4780]: I1205 07:19:48.121403 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerID="fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a" exitCode=0 Dec 05 07:19:48 crc kubenswrapper[4780]: I1205 07:19:48.121450 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerDied","Data":"fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a"} Dec 05 07:19:49 crc kubenswrapper[4780]: I1205 07:19:49.135487 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerID="e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92" exitCode=0 Dec 05 07:19:49 crc kubenswrapper[4780]: I1205 07:19:49.135755 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerDied","Data":"e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92"} Dec 05 07:19:50 crc kubenswrapper[4780]: I1205 07:19:50.158662 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerStarted","Data":"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9"} Dec 05 07:19:50 crc kubenswrapper[4780]: I1205 07:19:50.183542 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whgld" podStartSLOduration=2.657551673 podStartE2EDuration="4.183520553s" podCreationTimestamp="2025-12-05 07:19:46 +0000 UTC" firstStartedPulling="2025-12-05 07:19:48.124248907 +0000 UTC m=+2022.193765239" lastFinishedPulling="2025-12-05 07:19:49.650217787 +0000 UTC m=+2023.719734119" observedRunningTime="2025-12-05 07:19:50.170649825 +0000 UTC m=+2024.240166167" watchObservedRunningTime="2025-12-05 07:19:50.183520553 +0000 UTC m=+2024.253036885" Dec 05 07:19:56 crc kubenswrapper[4780]: I1205 07:19:56.584621 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:56 crc kubenswrapper[4780]: I1205 07:19:56.585279 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:56 crc kubenswrapper[4780]: I1205 07:19:56.630272 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:57 crc kubenswrapper[4780]: I1205 07:19:57.249257 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:57 crc kubenswrapper[4780]: I1205 07:19:57.298542 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.213673 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-whgld" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="registry-server" containerID="cri-o://b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9" gracePeriod=2 Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.632916 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.705330 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content\") pod \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.705402 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities\") pod \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.705488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh2bv\" (UniqueName: \"kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv\") pod \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\" (UID: \"ae6f87f7-2d29-48a8-b24d-a9f28357caa0\") " Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.706792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities" (OuterVolumeSpecName: "utilities") pod "ae6f87f7-2d29-48a8-b24d-a9f28357caa0" (UID: "ae6f87f7-2d29-48a8-b24d-a9f28357caa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.718395 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv" (OuterVolumeSpecName: "kube-api-access-nh2bv") pod "ae6f87f7-2d29-48a8-b24d-a9f28357caa0" (UID: "ae6f87f7-2d29-48a8-b24d-a9f28357caa0"). InnerVolumeSpecName "kube-api-access-nh2bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.769359 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae6f87f7-2d29-48a8-b24d-a9f28357caa0" (UID: "ae6f87f7-2d29-48a8-b24d-a9f28357caa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.807311 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.807370 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:19:59 crc kubenswrapper[4780]: I1205 07:19:59.807387 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh2bv\" (UniqueName: \"kubernetes.io/projected/ae6f87f7-2d29-48a8-b24d-a9f28357caa0-kube-api-access-nh2bv\") on node \"crc\" DevicePath \"\"" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.222856 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerID="b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9" exitCode=0 Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.222935 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whgld" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.222975 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerDied","Data":"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9"} Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.223048 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whgld" event={"ID":"ae6f87f7-2d29-48a8-b24d-a9f28357caa0","Type":"ContainerDied","Data":"f3810bcfc1e2fe167de549b1985f1bfa1b894c860d2e954603f22f60434280f6"} Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.223086 4780 scope.go:117] "RemoveContainer" containerID="b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.265334 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.276000 4780 scope.go:117] "RemoveContainer" containerID="e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.278503 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-whgld"] Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.306892 4780 scope.go:117] "RemoveContainer" containerID="fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.343002 4780 scope.go:117] "RemoveContainer" containerID="b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9" Dec 05 07:20:00 crc kubenswrapper[4780]: E1205 07:20:00.343582 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9\": container with ID starting with b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9 not found: ID does not exist" containerID="b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.343622 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9"} err="failed to get container status \"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9\": rpc error: code = NotFound desc = could not find container \"b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9\": container with ID starting with b198d9c55390b2c34015b87e348fcc63bbd9c52aa31d304f10ae92e4ede690a9 not found: ID does not exist" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.343651 4780 scope.go:117] "RemoveContainer" containerID="e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92" Dec 05 07:20:00 crc kubenswrapper[4780]: E1205 07:20:00.344071 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92\": container with ID starting with e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92 not found: ID does not exist" containerID="e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.344092 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92"} err="failed to get container status \"e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92\": rpc error: code = NotFound desc = could not find container \"e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92\": container with ID starting with e7a8594e2e72f5af88ccecc3f5ce87aba5b1666fc5c171ff269ac0a474a82d92 not found: ID does not exist" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.344105 4780 scope.go:117] "RemoveContainer" containerID="fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a" Dec 05 07:20:00 crc kubenswrapper[4780]: E1205 07:20:00.344472 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a\": container with ID starting with fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a not found: ID does not exist" containerID="fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a" Dec 05 07:20:00 crc kubenswrapper[4780]: I1205 07:20:00.344545 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a"} err="failed to get container status \"fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a\": rpc error: code = NotFound desc = could not find container \"fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a\": container with ID starting with fd0c808187396f64b6d8fab7b1e708fd399204b46baba7049bc63904c90ea97a not found: ID does not exist" Dec 05 07:20:02 crc kubenswrapper[4780]: I1205 07:20:02.147584 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" path="/var/lib/kubelet/pods/ae6f87f7-2d29-48a8-b24d-a9f28357caa0/volumes" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.934476 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:20:55 crc kubenswrapper[4780]: E1205 07:20:55.935378 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="extract-content" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.935395 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="extract-content" Dec 05 07:20:55 crc kubenswrapper[4780]: E1205 07:20:55.935417 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="registry-server" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.935424 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="registry-server" Dec 05 07:20:55 crc kubenswrapper[4780]: E1205 07:20:55.935435 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="extract-utilities" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.935442 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="extract-utilities" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.935589 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6f87f7-2d29-48a8-b24d-a9f28357caa0" containerName="registry-server" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.936583 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.954531 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.971486 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vsbw\" (UniqueName: \"kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.971730 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:55 crc kubenswrapper[4780]: I1205 07:20:55.971807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.073132 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vsbw\" (UniqueName: \"kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.073215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content\") pod 
\"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.073246 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.073755 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.073809 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.112093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vsbw\" (UniqueName: \"kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw\") pod \"redhat-marketplace-dxcnp\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.259591 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:20:56 crc kubenswrapper[4780]: I1205 07:20:56.762166 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:20:57 crc kubenswrapper[4780]: I1205 07:20:57.625424 4780 generic.go:334] "Generic (PLEG): container finished" podID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerID="7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7" exitCode=0 Dec 05 07:20:57 crc kubenswrapper[4780]: I1205 07:20:57.625479 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerDied","Data":"7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7"} Dec 05 07:20:57 crc kubenswrapper[4780]: I1205 07:20:57.625507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerStarted","Data":"f1bfa4950cdd24e860033653accde8b46048bf2fa0ab7402a42814ed6b6c9e78"} Dec 05 07:20:58 crc kubenswrapper[4780]: I1205 07:20:58.633969 4780 generic.go:334] "Generic (PLEG): container finished" podID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerID="9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c" exitCode=0 Dec 05 07:20:58 crc kubenswrapper[4780]: I1205 07:20:58.634033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerDied","Data":"9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c"} Dec 05 07:20:59 crc kubenswrapper[4780]: I1205 07:20:59.642540 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerStarted","Data":"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97"} Dec 05 07:20:59 crc kubenswrapper[4780]: I1205 07:20:59.657469 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dxcnp" podStartSLOduration=3.240776195 podStartE2EDuration="4.65744906s" podCreationTimestamp="2025-12-05 07:20:55 +0000 UTC" firstStartedPulling="2025-12-05 07:20:57.627497597 +0000 UTC m=+2091.697013929" lastFinishedPulling="2025-12-05 07:20:59.044170462 +0000 UTC m=+2093.113686794" observedRunningTime="2025-12-05 07:20:59.657221154 +0000 UTC m=+2093.726737486" watchObservedRunningTime="2025-12-05 07:20:59.65744906 +0000 UTC m=+2093.726965392" Dec 05 07:21:06 crc kubenswrapper[4780]: I1205 07:21:06.260001 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:06 crc kubenswrapper[4780]: I1205 07:21:06.260647 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:06 crc kubenswrapper[4780]: I1205 07:21:06.312807 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:06 crc kubenswrapper[4780]: I1205 07:21:06.738701 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:06 crc kubenswrapper[4780]: I1205 07:21:06.779872 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:21:08 crc kubenswrapper[4780]: I1205 07:21:08.699855 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dxcnp" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="registry-server" containerID="cri-o://50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97" gracePeriod=2 Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.623804 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.707603 4780 generic.go:334] "Generic (PLEG): container finished" podID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerID="50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97" exitCode=0 Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.707660 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerDied","Data":"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97"} Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.707737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxcnp" event={"ID":"214d684f-5786-4d5c-80e0-197e40dbaee2","Type":"ContainerDied","Data":"f1bfa4950cdd24e860033653accde8b46048bf2fa0ab7402a42814ed6b6c9e78"} Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.707758 4780 scope.go:117] "RemoveContainer" containerID="50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.707800 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxcnp" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.724546 4780 scope.go:117] "RemoveContainer" containerID="9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.738965 4780 scope.go:117] "RemoveContainer" containerID="7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.762272 4780 scope.go:117] "RemoveContainer" containerID="50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97" Dec 05 07:21:09 crc kubenswrapper[4780]: E1205 07:21:09.762725 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97\": container with ID starting with 50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97 not found: ID does not exist" containerID="50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.762756 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97"} err="failed to get container status \"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97\": rpc error: code = NotFound desc = could not find container \"50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97\": container with ID starting with 50cdf5835e6a233c20107c0eaf14ab90013e0067a472b8963f6a7e73ee2c3b97 not found: ID does not exist" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.762783 4780 scope.go:117] "RemoveContainer" containerID="9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c" Dec 05 07:21:09 crc kubenswrapper[4780]: E1205 07:21:09.763196 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c\": container with ID starting with 9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c not found: ID does not exist" containerID="9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763222 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c"} err="failed to get container status \"9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c\": rpc error: code = NotFound desc = could not find container \"9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c\": container with ID starting with 9bbd8c60fa301c3f7a4ad6ceb657268aaec0375acae03f30dff67c9bfd58710c not found: ID does not exist" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763244 4780 scope.go:117] "RemoveContainer" containerID="7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763422 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vsbw\" (UniqueName: \"kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw\") pod \"214d684f-5786-4d5c-80e0-197e40dbaee2\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763475 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content\") pod \"214d684f-5786-4d5c-80e0-197e40dbaee2\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763520 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities\") pod \"214d684f-5786-4d5c-80e0-197e40dbaee2\" (UID: \"214d684f-5786-4d5c-80e0-197e40dbaee2\") " Dec 05 07:21:09 crc kubenswrapper[4780]: E1205 07:21:09.763623 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7\": container with ID starting with 7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7 not found: ID does not exist" containerID="7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.763654 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7"} err="failed to get container status \"7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7\": rpc error: code = NotFound desc = could not find container \"7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7\": container with ID starting with 7e557d99a440bf221bd7382bd87a7ea7738447056bf9a1d0d893e3b05e07cca7 not found: ID does not exist" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.764671 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities" (OuterVolumeSpecName: "utilities") pod "214d684f-5786-4d5c-80e0-197e40dbaee2" (UID: "214d684f-5786-4d5c-80e0-197e40dbaee2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.769136 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw" (OuterVolumeSpecName: "kube-api-access-6vsbw") pod "214d684f-5786-4d5c-80e0-197e40dbaee2" (UID: "214d684f-5786-4d5c-80e0-197e40dbaee2"). InnerVolumeSpecName "kube-api-access-6vsbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.787300 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "214d684f-5786-4d5c-80e0-197e40dbaee2" (UID: "214d684f-5786-4d5c-80e0-197e40dbaee2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.864813 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.864996 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vsbw\" (UniqueName: \"kubernetes.io/projected/214d684f-5786-4d5c-80e0-197e40dbaee2-kube-api-access-6vsbw\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:09 crc kubenswrapper[4780]: I1205 07:21:09.865044 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/214d684f-5786-4d5c-80e0-197e40dbaee2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:10 crc kubenswrapper[4780]: I1205 07:21:10.034460 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:21:10 crc kubenswrapper[4780]: I1205 07:21:10.040467 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxcnp"] Dec 05 07:21:10 crc kubenswrapper[4780]: I1205 07:21:10.147277 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" path="/var/lib/kubelet/pods/214d684f-5786-4d5c-80e0-197e40dbaee2/volumes" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.094223 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 07:21:25 crc kubenswrapper[4780]: E1205 07:21:25.095138 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="extract-utilities" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.095150 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="extract-utilities" Dec 05 07:21:25 crc kubenswrapper[4780]: E1205 07:21:25.095180 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="extract-content" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.095187 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="extract-content" Dec 05 07:21:25 crc kubenswrapper[4780]: E1205 07:21:25.095200 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="registry-server" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.095207 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="registry-server" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.095338 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="214d684f-5786-4d5c-80e0-197e40dbaee2" containerName="registry-server" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.096358 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.113877 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.180209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzzf\" (UniqueName: \"kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.180285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.180497 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.281915 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.282004 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzzf\" (UniqueName: \"kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.282026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.282524 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.282540 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.301600 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ggzzf\" (UniqueName: \"kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf\") pod \"community-operators-lngjb\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.416693 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:25 crc kubenswrapper[4780]: I1205 07:21:25.875636 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 07:21:26 crc kubenswrapper[4780]: I1205 07:21:26.822111 4780 generic.go:334] "Generic (PLEG): container finished" podID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerID="28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944" exitCode=0 Dec 05 07:21:26 crc kubenswrapper[4780]: I1205 07:21:26.822427 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerDied","Data":"28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944"} Dec 05 07:21:26 crc kubenswrapper[4780]: I1205 07:21:26.822456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerStarted","Data":"e3afebb57172d1081bbd07418410c7fda868b99429f9f7eeb276af09bd1263f2"} Dec 05 07:21:29 crc kubenswrapper[4780]: I1205 07:21:29.907500 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:21:29 crc kubenswrapper[4780]: I1205 07:21:29.907955 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:21:30 crc kubenswrapper[4780]: I1205 07:21:30.852129 4780 generic.go:334] "Generic (PLEG): container finished" podID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerID="1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042" exitCode=0 Dec 05 07:21:30 crc kubenswrapper[4780]: I1205 07:21:30.852227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerDied","Data":"1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042"} Dec 05 07:21:31 crc kubenswrapper[4780]: I1205 07:21:31.862199 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerStarted","Data":"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0"} Dec 05 07:21:31 crc kubenswrapper[4780]: I1205 07:21:31.883954 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lngjb" podStartSLOduration=2.468905931 podStartE2EDuration="6.883931916s" podCreationTimestamp="2025-12-05 07:21:25 +0000 UTC" 
firstStartedPulling="2025-12-05 07:21:26.824407938 +0000 UTC m=+2120.893924270" lastFinishedPulling="2025-12-05 07:21:31.239433923 +0000 UTC m=+2125.308950255" observedRunningTime="2025-12-05 07:21:31.883350779 +0000 UTC m=+2125.952867111" watchObservedRunningTime="2025-12-05 07:21:31.883931916 +0000 UTC m=+2125.953448248" Dec 05 07:21:35 crc kubenswrapper[4780]: I1205 07:21:35.417291 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:35 crc kubenswrapper[4780]: I1205 07:21:35.418674 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:35 crc kubenswrapper[4780]: I1205 07:21:35.467730 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:36 crc kubenswrapper[4780]: I1205 07:21:36.935516 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lngjb" Dec 05 07:21:37 crc kubenswrapper[4780]: I1205 07:21:37.005600 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 07:21:37 crc kubenswrapper[4780]: I1205 07:21:37.065396 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 07:21:37 crc kubenswrapper[4780]: I1205 07:21:37.065924 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5hp5n" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="registry-server" containerID="cri-o://6e29c2b9fac1ed7c781190d522486cd9067231353c9f62fd652508a251250a37" gracePeriod=2 Dec 05 07:21:37 crc kubenswrapper[4780]: I1205 07:21:37.902826 4780 generic.go:334] "Generic (PLEG): container finished" podID="20376915-18d2-4c02-bfdd-eede7902927c" containerID="6e29c2b9fac1ed7c781190d522486cd9067231353c9f62fd652508a251250a37" exitCode=0 Dec 05 07:21:37 crc kubenswrapper[4780]: I1205 07:21:37.903617 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerDied","Data":"6e29c2b9fac1ed7c781190d522486cd9067231353c9f62fd652508a251250a37"} Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.610167 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.765787 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content\") pod \"20376915-18d2-4c02-bfdd-eede7902927c\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.765851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities\") pod \"20376915-18d2-4c02-bfdd-eede7902927c\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.765991 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flvpt\" (UniqueName: \"kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt\") pod \"20376915-18d2-4c02-bfdd-eede7902927c\" (UID: \"20376915-18d2-4c02-bfdd-eede7902927c\") " Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.766563 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities" (OuterVolumeSpecName: "utilities") pod "20376915-18d2-4c02-bfdd-eede7902927c" (UID: "20376915-18d2-4c02-bfdd-eede7902927c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.773074 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt" (OuterVolumeSpecName: "kube-api-access-flvpt") pod "20376915-18d2-4c02-bfdd-eede7902927c" (UID: "20376915-18d2-4c02-bfdd-eede7902927c"). InnerVolumeSpecName "kube-api-access-flvpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.815215 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20376915-18d2-4c02-bfdd-eede7902927c" (UID: "20376915-18d2-4c02-bfdd-eede7902927c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.867726 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flvpt\" (UniqueName: \"kubernetes.io/projected/20376915-18d2-4c02-bfdd-eede7902927c-kube-api-access-flvpt\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.867775 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.867784 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20376915-18d2-4c02-bfdd-eede7902927c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.912497 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5hp5n" event={"ID":"20376915-18d2-4c02-bfdd-eede7902927c","Type":"ContainerDied","Data":"0f7af5806a518a47cec9eba7e6aa4552445c81e13e066fc0ab756f49a577ebf7"} Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.912571 4780 scope.go:117] "RemoveContainer" containerID="6e29c2b9fac1ed7c781190d522486cd9067231353c9f62fd652508a251250a37" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.912504 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5hp5n" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.938794 4780 scope.go:117] "RemoveContainer" containerID="ba3d89dbcf600d1ba13f6667188da3de22dc3bd262e546fa114d7ee57b5b6ebe" Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.955435 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.962165 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5hp5n"] Dec 05 07:21:38 crc kubenswrapper[4780]: I1205 07:21:38.979577 4780 scope.go:117] "RemoveContainer" containerID="d7e747f8e0fa6e7e02b0852fa78237bf71168923d01fbc885d009f45c4f4638f" Dec 05 07:21:40 crc kubenswrapper[4780]: I1205 07:21:40.147564 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20376915-18d2-4c02-bfdd-eede7902927c" path="/var/lib/kubelet/pods/20376915-18d2-4c02-bfdd-eede7902927c/volumes" Dec 05 07:21:59 crc kubenswrapper[4780]: I1205 07:21:59.907804 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:21:59 crc kubenswrapper[4780]: I1205 07:21:59.908419 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:22:29 crc kubenswrapper[4780]: I1205 07:22:29.907740 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:22:29 crc kubenswrapper[4780]: I1205 07:22:29.908273 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:22:29 crc kubenswrapper[4780]: I1205 07:22:29.908316 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:22:29 crc kubenswrapper[4780]: I1205 07:22:29.909044 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:22:29 crc kubenswrapper[4780]: I1205 07:22:29.909114 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded" gracePeriod=600 Dec 05 07:22:30 crc kubenswrapper[4780]: I1205 07:22:30.282521 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded" exitCode=0 Dec 05 07:22:30 crc kubenswrapper[4780]: I1205 07:22:30.282863 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded"} Dec 05 07:22:30 crc kubenswrapper[4780]: I1205 07:22:30.282922 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"} Dec 05 07:22:30 crc kubenswrapper[4780]: I1205 07:22:30.282939 4780 scope.go:117] "RemoveContainer" containerID="51fcb9a138d77e02f2d12c03f9acbebe48a9c1771481039c4218f5385eaa4e5a" Dec 05 07:24:23 crc kubenswrapper[4780]: I1205 07:24:23.755369 4780 scope.go:117] "RemoveContainer" containerID="ed08d36bf54642e8a425c530a61de5e79ba4ceda1f2cf644e65f3c03f0e0304d" Dec 05 07:24:23 crc kubenswrapper[4780]: I1205 07:24:23.786091 4780 scope.go:117] "RemoveContainer" containerID="2d1612e9bc63820a79c7b75d24d2ae003be1f24eec28b3b55ea42d700f1dec97" Dec 05 07:24:23 crc kubenswrapper[4780]: I1205 07:24:23.810258 4780 scope.go:117] "RemoveContainer" containerID="f693b90404f756e759beb8f8494b66d36a37c42631b11aec285462bf9dfa3ecb" Dec 05 07:24:59 crc kubenswrapper[4780]: I1205 07:24:59.908310 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:24:59 crc kubenswrapper[4780]: I1205 
07:24:59.909029 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:25:29 crc kubenswrapper[4780]: I1205 07:25:29.908188 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:25:29 crc kubenswrapper[4780]: I1205 07:25:29.908687 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:25:59 crc kubenswrapper[4780]: I1205 07:25:59.908601 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:25:59 crc kubenswrapper[4780]: I1205 07:25:59.909257 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:25:59 crc kubenswrapper[4780]: I1205 07:25:59.909313 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:25:59 crc kubenswrapper[4780]: I1205 07:25:59.910233 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:25:59 crc kubenswrapper[4780]: I1205 07:25:59.910289 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" gracePeriod=600 Dec 05 07:26:00 crc kubenswrapper[4780]: E1205 07:26:00.063318 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:26:00 crc kubenswrapper[4780]: I1205 07:26:00.799647 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" 
containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" exitCode=0 Dec 05 07:26:00 crc kubenswrapper[4780]: I1205 07:26:00.799705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"} Dec 05 07:26:00 crc kubenswrapper[4780]: I1205 07:26:00.799753 4780 scope.go:117] "RemoveContainer" containerID="422b3aca6aa7b99690985de35a18b98eeb386a21d7488982213818b1a9878ded" Dec 05 07:26:00 crc kubenswrapper[4780]: I1205 07:26:00.800373 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:26:00 crc kubenswrapper[4780]: E1205 07:26:00.800604 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:26:13 crc kubenswrapper[4780]: I1205 07:26:13.139050 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:26:13 crc kubenswrapper[4780]: E1205 07:26:13.139706 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:26:28 crc kubenswrapper[4780]: I1205 07:26:28.139306 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:26:28 crc kubenswrapper[4780]: E1205 07:26:28.139955 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:26:41 crc kubenswrapper[4780]: I1205 07:26:41.139291 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:26:41 crc kubenswrapper[4780]: E1205 07:26:41.140436 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:26:54 crc kubenswrapper[4780]: I1205 07:26:54.139102 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:26:54 crc kubenswrapper[4780]: E1205 07:26:54.139774 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:27:08 crc kubenswrapper[4780]: I1205 07:27:08.139704 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:27:08 crc kubenswrapper[4780]: E1205 07:27:08.140520 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:27:23 crc kubenswrapper[4780]: I1205 07:27:23.138898 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:27:23 crc kubenswrapper[4780]: E1205 07:27:23.139624 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:27:35 crc kubenswrapper[4780]: I1205 07:27:35.139129 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:27:35 crc kubenswrapper[4780]: E1205 07:27:35.139976 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:27:47 crc kubenswrapper[4780]: I1205 07:27:47.138978 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:27:47 crc kubenswrapper[4780]: E1205 07:27:47.139682 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:00 crc kubenswrapper[4780]: I1205 07:28:00.138238 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:28:00 crc kubenswrapper[4780]: E1205 07:28:00.138937 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:14 crc kubenswrapper[4780]: I1205 07:28:14.138986 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:28:14 crc kubenswrapper[4780]: E1205 07:28:14.139791 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:26 crc kubenswrapper[4780]: I1205 07:28:26.141937 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:28:26 crc kubenswrapper[4780]: E1205 07:28:26.142763 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.800738 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:32 crc kubenswrapper[4780]: E1205 07:28:32.804054 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="registry-server" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.804181 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="registry-server" Dec 05 07:28:32 crc kubenswrapper[4780]: E1205 07:28:32.804290 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="extract-utilities" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.804556 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="extract-utilities" Dec 05 07:28:32 crc kubenswrapper[4780]: E1205 07:28:32.804652 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="extract-content" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.804728 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="extract-content" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.805075 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="20376915-18d2-4c02-bfdd-eede7902927c" containerName="registry-server" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.807216 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.811904 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.904976 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.905068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h84c\" (UniqueName: \"kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:32 crc kubenswrapper[4780]: I1205 07:28:32.905114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.006911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h84c\" (UniqueName: \"kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.007273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.007421 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.007753 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.007905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.027603 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8h84c\" (UniqueName: \"kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c\") pod \"redhat-operators-v9q7w\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.139520 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.368099 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.956636 4780 generic.go:334] "Generic (PLEG): container finished" podID="b24cac34-5486-4859-80a4-a786d0206257" containerID="9cc0c74fd0b5e54f0ca85c01be19bf5bcd46c19b60442f65e6406eedc3a5de8f" exitCode=0 Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.957115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerDied","Data":"9cc0c74fd0b5e54f0ca85c01be19bf5bcd46c19b60442f65e6406eedc3a5de8f"} Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.958242 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerStarted","Data":"c8e128dc2a0c275fe8c5296c6f46ec82ec19f9ee210b306f3f8bf1b906a4d519"} Dec 05 07:28:33 crc kubenswrapper[4780]: I1205 07:28:33.962514 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:28:34 crc kubenswrapper[4780]: I1205 07:28:34.972343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerStarted","Data":"6837f4e3466388a64f2a4fb617bdc8b1e8819fc37df88df08d490aee82201c08"} Dec 05 07:28:35 crc kubenswrapper[4780]: I1205 07:28:35.981380 4780 generic.go:334] "Generic (PLEG): container finished" podID="b24cac34-5486-4859-80a4-a786d0206257" containerID="6837f4e3466388a64f2a4fb617bdc8b1e8819fc37df88df08d490aee82201c08" exitCode=0 Dec 05 07:28:35 crc kubenswrapper[4780]: I1205 07:28:35.981440 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerDied","Data":"6837f4e3466388a64f2a4fb617bdc8b1e8819fc37df88df08d490aee82201c08"} Dec 05 07:28:36 crc kubenswrapper[4780]: I1205 07:28:36.992532 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerStarted","Data":"a8baa3214817441c7570307b00dea1a1e656e476740d145469432dbe94fcbd4d"} Dec 05 07:28:37 crc kubenswrapper[4780]: I1205 07:28:37.010987 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9q7w" podStartSLOduration=2.603374082 podStartE2EDuration="5.01096953s" podCreationTimestamp="2025-12-05 07:28:32 +0000 UTC" firstStartedPulling="2025-12-05 07:28:33.960008938 +0000 UTC m=+2548.029525270" lastFinishedPulling="2025-12-05 07:28:36.367604386 +0000 UTC m=+2550.437120718" observedRunningTime="2025-12-05 07:28:37.007790884 +0000 UTC m=+2551.077307216" watchObservedRunningTime="2025-12-05 07:28:37.01096953 +0000 UTC m=+2551.080485862" Dec 05 07:28:37 crc 
kubenswrapper[4780]: I1205 07:28:37.139229 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:28:37 crc kubenswrapper[4780]: E1205 07:28:37.139464 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:43 crc kubenswrapper[4780]: I1205 07:28:43.140630 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:43 crc kubenswrapper[4780]: I1205 07:28:43.141834 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:43 crc kubenswrapper[4780]: I1205 07:28:43.188864 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:44 crc kubenswrapper[4780]: I1205 07:28:44.077957 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:44 crc kubenswrapper[4780]: I1205 07:28:44.116701 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:46 crc kubenswrapper[4780]: I1205 07:28:46.053490 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9q7w" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="registry-server" containerID="cri-o://a8baa3214817441c7570307b00dea1a1e656e476740d145469432dbe94fcbd4d" gracePeriod=2 Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.093657 4780 generic.go:334] "Generic (PLEG): container finished" podID="b24cac34-5486-4859-80a4-a786d0206257" containerID="a8baa3214817441c7570307b00dea1a1e656e476740d145469432dbe94fcbd4d" exitCode=0 Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.093726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerDied","Data":"a8baa3214817441c7570307b00dea1a1e656e476740d145469432dbe94fcbd4d"} Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.257814 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.323031 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content\") pod \"b24cac34-5486-4859-80a4-a786d0206257\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.323123 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities\") pod \"b24cac34-5486-4859-80a4-a786d0206257\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.323204 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h84c\" (UniqueName: \"kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c\") pod \"b24cac34-5486-4859-80a4-a786d0206257\" (UID: \"b24cac34-5486-4859-80a4-a786d0206257\") " Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.324175 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities" (OuterVolumeSpecName: "utilities") pod "b24cac34-5486-4859-80a4-a786d0206257" (UID: "b24cac34-5486-4859-80a4-a786d0206257"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.329663 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c" (OuterVolumeSpecName: "kube-api-access-8h84c") pod "b24cac34-5486-4859-80a4-a786d0206257" (UID: "b24cac34-5486-4859-80a4-a786d0206257"). InnerVolumeSpecName "kube-api-access-8h84c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.424697 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.424731 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h84c\" (UniqueName: \"kubernetes.io/projected/b24cac34-5486-4859-80a4-a786d0206257-kube-api-access-8h84c\") on node \"crc\" DevicePath \"\"" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.438191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b24cac34-5486-4859-80a4-a786d0206257" (UID: "b24cac34-5486-4859-80a4-a786d0206257"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:28:48 crc kubenswrapper[4780]: I1205 07:28:48.525894 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b24cac34-5486-4859-80a4-a786d0206257-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.104915 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9q7w" event={"ID":"b24cac34-5486-4859-80a4-a786d0206257","Type":"ContainerDied","Data":"c8e128dc2a0c275fe8c5296c6f46ec82ec19f9ee210b306f3f8bf1b906a4d519"} Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.105010 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9q7w" Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.105319 4780 scope.go:117] "RemoveContainer" containerID="a8baa3214817441c7570307b00dea1a1e656e476740d145469432dbe94fcbd4d" Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.126978 4780 scope.go:117] "RemoveContainer" containerID="6837f4e3466388a64f2a4fb617bdc8b1e8819fc37df88df08d490aee82201c08" Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.152744 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.160951 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9q7w"] Dec 05 07:28:49 crc kubenswrapper[4780]: I1205 07:28:49.173134 4780 scope.go:117] "RemoveContainer" containerID="9cc0c74fd0b5e54f0ca85c01be19bf5bcd46c19b60442f65e6406eedc3a5de8f" Dec 05 07:28:50 crc kubenswrapper[4780]: I1205 07:28:50.139010 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:28:50 crc kubenswrapper[4780]: E1205 07:28:50.139353 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:28:50 crc kubenswrapper[4780]: I1205 07:28:50.147740 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24cac34-5486-4859-80a4-a786d0206257" path="/var/lib/kubelet/pods/b24cac34-5486-4859-80a4-a786d0206257/volumes" Dec 05 07:29:01 crc kubenswrapper[4780]: I1205 07:29:01.138426 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:29:01 crc kubenswrapper[4780]: E1205 07:29:01.139520 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:29:12 crc kubenswrapper[4780]: I1205 07:29:12.139254 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:29:12 crc kubenswrapper[4780]: E1205 07:29:12.139891 
Dec 05 07:29:01 crc kubenswrapper[4780]: I1205 07:29:01.138426 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:29:01 crc kubenswrapper[4780]: E1205 07:29:01.139520 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:29:12 crc kubenswrapper[4780]: I1205 07:29:12.139254 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:29:12 crc kubenswrapper[4780]: E1205 07:29:12.139891 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:29:24 crc kubenswrapper[4780]: I1205 07:29:24.139137 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:29:24 crc kubenswrapper[4780]: E1205 07:29:24.139940 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:29:35 crc kubenswrapper[4780]: I1205 07:29:35.139670 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:29:35 crc kubenswrapper[4780]: E1205 07:29:35.140474 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:29:46 crc kubenswrapper[4780]: I1205 07:29:46.145224 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:29:46 crc kubenswrapper[4780]: E1205 07:29:46.146491 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.139828 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477"
Dec 05 07:30:00 crc kubenswrapper[4780]: E1205 07:30:00.140786 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.163317 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"]
Dec 05 07:30:00 crc kubenswrapper[4780]: E1205 07:30:00.163942 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="extract-content"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.163974 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="extract-content"
Dec 05 07:30:00 crc kubenswrapper[4780]: E1205 07:30:00.163997 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="extract-utilities"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.164008 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="extract-utilities"
Dec 05 07:30:00 crc kubenswrapper[4780]: E1205 07:30:00.164049 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="registry-server"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.164061 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="registry-server"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.164259 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24cac34-5486-4859-80a4-a786d0206257" containerName="registry-server"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.165034 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.168383 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.168611 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.172227 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"]
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.262777 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.262849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7vl\" (UniqueName: \"kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.262990 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.363767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.363906 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.363938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7vl\" (UniqueName: \"kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.365348 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.370602 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.382647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7vl\" (UniqueName: \"kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl\") pod \"collect-profiles-29415330-4rgw7\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.490695 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:00 crc kubenswrapper[4780]: I1205 07:30:00.934771 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"]
Dec 05 07:30:01 crc kubenswrapper[4780]: I1205 07:30:01.012492 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7" event={"ID":"a6d66a54-fe88-4e26-a374-70df7e86c9ea","Type":"ContainerStarted","Data":"a227a13337281b0cd48e5896b007716ff771eb28d094169e0887f41120e8feb4"}
Dec 05 07:30:02 crc kubenswrapper[4780]: I1205 07:30:02.020478 4780 generic.go:334] "Generic (PLEG): container finished" podID="a6d66a54-fe88-4e26-a374-70df7e86c9ea" containerID="4c7eaf1873617357f9731db8f19194c7241afa46d79a0f4d42926494706629d4" exitCode=0
Dec 05 07:30:02 crc kubenswrapper[4780]: I1205 07:30:02.020579 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7" event={"ID":"a6d66a54-fe88-4e26-a374-70df7e86c9ea","Type":"ContainerDied","Data":"4c7eaf1873617357f9731db8f19194c7241afa46d79a0f4d42926494706629d4"}
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.259333 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.403848 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume\") pod \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") "
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.404251 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume\") pod \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") "
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.404358 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm7vl\" (UniqueName: \"kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl\") pod \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\" (UID: \"a6d66a54-fe88-4e26-a374-70df7e86c9ea\") "
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.405151 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6d66a54-fe88-4e26-a374-70df7e86c9ea" (UID: "a6d66a54-fe88-4e26-a374-70df7e86c9ea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.410405 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6d66a54-fe88-4e26-a374-70df7e86c9ea" (UID: "a6d66a54-fe88-4e26-a374-70df7e86c9ea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.411196 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl" (OuterVolumeSpecName: "kube-api-access-jm7vl") pod "a6d66a54-fe88-4e26-a374-70df7e86c9ea" (UID: "a6d66a54-fe88-4e26-a374-70df7e86c9ea"). InnerVolumeSpecName "kube-api-access-jm7vl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.509264 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6d66a54-fe88-4e26-a374-70df7e86c9ea-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.509323 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm7vl\" (UniqueName: \"kubernetes.io/projected/a6d66a54-fe88-4e26-a374-70df7e86c9ea-kube-api-access-jm7vl\") on node \"crc\" DevicePath \"\""
Dec 05 07:30:03 crc kubenswrapper[4780]: I1205 07:30:03.509402 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6d66a54-fe88-4e26-a374-70df7e86c9ea-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 07:30:04 crc kubenswrapper[4780]: I1205 07:30:04.035697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7" event={"ID":"a6d66a54-fe88-4e26-a374-70df7e86c9ea","Type":"ContainerDied","Data":"a227a13337281b0cd48e5896b007716ff771eb28d094169e0887f41120e8feb4"}
Dec 05 07:30:04 crc kubenswrapper[4780]: I1205 07:30:04.035741 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a227a13337281b0cd48e5896b007716ff771eb28d094169e0887f41120e8feb4"
Dec 05 07:30:04 crc kubenswrapper[4780]: I1205 07:30:04.035757 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"
Dec 05 07:30:04 crc kubenswrapper[4780]: I1205 07:30:04.321996 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s"]
Dec 05 07:30:04 crc kubenswrapper[4780]: I1205 07:30:04.326934 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415285-24x6s"]
Dec 05 07:30:06 crc kubenswrapper[4780]: I1205 07:30:06.147184 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8260b9e3-bfa3-4d9a-9af8-4764100b21c0" path="/var/lib/kubelet/pods/8260b9e3-bfa3-4d9a-9af8-4764100b21c0/volumes"
Need to start a new one" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.016563 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4zgh"] Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.090970 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.091078 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.091132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.139617 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:30:24 crc kubenswrapper[4780]: E1205 07:30:24.139872 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.192652 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.192766 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.192828 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.193347 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.193523 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.211636 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj\") pod \"certified-operators-s4zgh\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.325550 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:24 crc kubenswrapper[4780]: I1205 07:30:24.881382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4zgh"] Dec 05 07:30:25 crc kubenswrapper[4780]: I1205 07:30:25.185251 4780 generic.go:334] "Generic (PLEG): container finished" podID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerID="5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d" exitCode=0 Dec 05 07:30:25 crc kubenswrapper[4780]: I1205 07:30:25.185368 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerDied","Data":"5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d"} Dec 05 07:30:25 crc kubenswrapper[4780]: I1205 07:30:25.185603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerStarted","Data":"9e4df3741a07edb5d3d1ca72a364b2853a05a034ed2396eaba706b0763ac2e86"} Dec 05 07:30:26 crc kubenswrapper[4780]: I1205 07:30:26.193835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerStarted","Data":"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775"} Dec 05 07:30:27 crc kubenswrapper[4780]: I1205 07:30:27.222900 4780 generic.go:334] "Generic (PLEG): container finished" podID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerID="3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775" exitCode=0 Dec 05 07:30:27 crc kubenswrapper[4780]: I1205 07:30:27.222991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerDied","Data":"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775"} Dec 05 07:30:28 crc kubenswrapper[4780]: I1205 07:30:28.231991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerStarted","Data":"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc"} Dec 05 07:30:28 crc kubenswrapper[4780]: I1205 07:30:28.247831 
4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s4zgh" podStartSLOduration=2.8593857209999998 podStartE2EDuration="5.247807929s" podCreationTimestamp="2025-12-05 07:30:23 +0000 UTC" firstStartedPulling="2025-12-05 07:30:25.18697474 +0000 UTC m=+2659.256491062" lastFinishedPulling="2025-12-05 07:30:27.575396928 +0000 UTC m=+2661.644913270" observedRunningTime="2025-12-05 07:30:28.244607243 +0000 UTC m=+2662.314123575" watchObservedRunningTime="2025-12-05 07:30:28.247807929 +0000 UTC m=+2662.317324261" Dec 05 07:30:34 crc kubenswrapper[4780]: I1205 07:30:34.328137 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:34 crc kubenswrapper[4780]: I1205 07:30:34.328501 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:34 crc kubenswrapper[4780]: I1205 07:30:34.370423 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:35 crc kubenswrapper[4780]: I1205 07:30:35.331571 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:35 crc kubenswrapper[4780]: I1205 07:30:35.376474 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4zgh"] Dec 05 07:30:36 crc kubenswrapper[4780]: I1205 07:30:36.143289 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:30:36 crc kubenswrapper[4780]: E1205 07:30:36.143523 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:30:37 crc kubenswrapper[4780]: I1205 07:30:37.299765 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s4zgh" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="registry-server" containerID="cri-o://0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc" gracePeriod=2 Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.169506 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.308188 4780 generic.go:334] "Generic (PLEG): container finished" podID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerID="0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc" exitCode=0 Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.308258 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerDied","Data":"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc"} Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.308334 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4zgh" event={"ID":"e798b9fb-2ff6-4194-874a-db43fb05a516","Type":"ContainerDied","Data":"9e4df3741a07edb5d3d1ca72a364b2853a05a034ed2396eaba706b0763ac2e86"} Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.308357 4780 scope.go:117] "RemoveContainer" containerID="0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.308275 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4zgh" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.315958 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content\") pod \"e798b9fb-2ff6-4194-874a-db43fb05a516\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.316054 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj\") pod \"e798b9fb-2ff6-4194-874a-db43fb05a516\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.316189 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities\") pod \"e798b9fb-2ff6-4194-874a-db43fb05a516\" (UID: \"e798b9fb-2ff6-4194-874a-db43fb05a516\") " Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.317457 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities" (OuterVolumeSpecName: "utilities") pod "e798b9fb-2ff6-4194-874a-db43fb05a516" (UID: "e798b9fb-2ff6-4194-874a-db43fb05a516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.333150 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj" (OuterVolumeSpecName: "kube-api-access-d8ttj") pod "e798b9fb-2ff6-4194-874a-db43fb05a516" (UID: "e798b9fb-2ff6-4194-874a-db43fb05a516"). InnerVolumeSpecName "kube-api-access-d8ttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.334109 4780 scope.go:117] "RemoveContainer" containerID="3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.365907 4780 scope.go:117] "RemoveContainer" containerID="5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.372189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e798b9fb-2ff6-4194-874a-db43fb05a516" (UID: "e798b9fb-2ff6-4194-874a-db43fb05a516"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.390938 4780 scope.go:117] "RemoveContainer" containerID="0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc" Dec 05 07:30:38 crc kubenswrapper[4780]: E1205 07:30:38.391475 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc\": container with ID starting with 0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc not found: ID does not exist" containerID="0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.391504 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc"} err="failed to get container status \"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc\": rpc error: code = NotFound desc = could not find container \"0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc\": container with ID starting with 0c012aa44a1e7c5a7b7af467ef68238fcfa3ed03848ee93a897ddaaf7ea1b7bc not found: ID does not exist" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.391523 4780 scope.go:117] "RemoveContainer" containerID="3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775" Dec 05 07:30:38 crc kubenswrapper[4780]: E1205 07:30:38.391944 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775\": container with ID starting with 3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775 not found: ID does not exist" containerID="3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.391970 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775"} err="failed to get container status \"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775\": rpc error: code = NotFound desc = could not find container \"3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775\": container with ID starting with 3db1e811903e9a2a34720d1c70b3ecf8957d5bc55e7ed76a81a1a013494e2775 not found: ID does not exist" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.391985 4780 scope.go:117] "RemoveContainer" containerID="5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d" Dec 05 07:30:38 crc kubenswrapper[4780]: 
E1205 07:30:38.392589 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d\": container with ID starting with 5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d not found: ID does not exist" containerID="5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.392608 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d"} err="failed to get container status \"5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d\": rpc error: code = NotFound desc = could not find container \"5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d\": container with ID starting with 5dab69d2908df4041bf5164acd31dcd1e399c57f3c3325e19d662faadb21487d not found: ID does not exist" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.418082 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/e798b9fb-2ff6-4194-874a-db43fb05a516-kube-api-access-d8ttj\") on node \"crc\" DevicePath \"\"" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.418117 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.418130 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e798b9fb-2ff6-4194-874a-db43fb05a516-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.645091 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4zgh"] Dec 05 07:30:38 crc kubenswrapper[4780]: I1205 07:30:38.651245 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s4zgh"] Dec 05 07:30:40 crc kubenswrapper[4780]: I1205 07:30:40.153352 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" path="/var/lib/kubelet/pods/e798b9fb-2ff6-4194-874a-db43fb05a516/volumes" Dec 05 07:30:51 crc kubenswrapper[4780]: I1205 07:30:51.138712 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:30:51 crc kubenswrapper[4780]: E1205 07:30:51.139944 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:31:03 crc kubenswrapper[4780]: I1205 07:31:03.139297 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:31:04 crc kubenswrapper[4780]: I1205 07:31:04.478179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
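The "ContainerStatus from runtime service failed ... NotFound" errors during the teardown above are benign even though they log at E level: the kubelet retries RemoveContainer for container IDs that CRI-O has already pruned along with the pod sandbox, and NotFound simply confirms the container is gone. The same tolerate-absence pattern applies to cleanup code against the API server; a sketch, assuming the Python kubernetes client (the helper name is hypothetical, and the 2-second grace period just mirrors the gracePeriod=2 used in the log):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def delete_pod_idempotent(name, ns):
        # Mirror the kubelet's behaviour: during cleanup, NotFound means the
        # object we wanted gone is already gone, so treat it as success.
        try:
            v1.delete_namespaced_pod(name, ns, grace_period_seconds=2)
        except ApiException as e:
            if e.status != 404:
                raise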
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06"} Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.409546 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:42 crc kubenswrapper[4780]: E1205 07:31:42.410478 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="extract-utilities" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.410494 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="extract-utilities" Dec 05 07:31:42 crc kubenswrapper[4780]: E1205 07:31:42.410518 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="registry-server" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.410526 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="registry-server" Dec 05 07:31:42 crc kubenswrapper[4780]: E1205 07:31:42.410544 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="extract-content" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.410552 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="extract-content" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.410760 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e798b9fb-2ff6-4194-874a-db43fb05a516" containerName="registry-server" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.412602 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.428768 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.549981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2pw\" (UniqueName: \"kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.550310 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.550575 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.652959 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts2pw\" (UniqueName: \"kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.653024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.653074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.653722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.653825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.684023 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ts2pw\" (UniqueName: \"kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw\") pod \"community-operators-l5rrk\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:42 crc kubenswrapper[4780]: I1205 07:31:42.739576 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:43 crc kubenswrapper[4780]: I1205 07:31:43.252799 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:43 crc kubenswrapper[4780]: E1205 07:31:43.557518 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b886b0_3256_4969_9fec_9381e3af75dc.slice/crio-conmon-b18c2647341269dde5e4f5acbc48fdc17c292c18b63b3d1893ce7b03124b8b21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b886b0_3256_4969_9fec_9381e3af75dc.slice/crio-b18c2647341269dde5e4f5acbc48fdc17c292c18b63b3d1893ce7b03124b8b21.scope\": RecentStats: unable to find data in memory cache]" Dec 05 07:31:43 crc kubenswrapper[4780]: I1205 07:31:43.778017 4780 generic.go:334] "Generic (PLEG): container finished" podID="10b886b0-3256-4969-9fec-9381e3af75dc" containerID="b18c2647341269dde5e4f5acbc48fdc17c292c18b63b3d1893ce7b03124b8b21" exitCode=0 Dec 05 07:31:43 crc kubenswrapper[4780]: I1205 07:31:43.778069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerDied","Data":"b18c2647341269dde5e4f5acbc48fdc17c292c18b63b3d1893ce7b03124b8b21"} Dec 05 07:31:43 crc kubenswrapper[4780]: I1205 07:31:43.778100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerStarted","Data":"02cf8f02ca62346c354fda415a826a9dfaa828e96a1fd89ab7def3448ec0301a"} Dec 05 07:31:44 crc kubenswrapper[4780]: I1205 07:31:44.786112 4780 generic.go:334] "Generic (PLEG): container finished" podID="10b886b0-3256-4969-9fec-9381e3af75dc" containerID="b3cabe23721fbcc9a7d2cb7452df816187cfa2092c3f6973a03c75879f90fa89" exitCode=0 Dec 05 07:31:44 crc kubenswrapper[4780]: I1205 07:31:44.786302 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerDied","Data":"b3cabe23721fbcc9a7d2cb7452df816187cfa2092c3f6973a03c75879f90fa89"} Dec 05 07:31:45 crc kubenswrapper[4780]: I1205 07:31:45.794855 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerStarted","Data":"f1f7c7ee739c2b768486a460408a321ebd84ecfda95a8834d95ae1e194517fda"} Dec 05 07:31:45 crc kubenswrapper[4780]: I1205 07:31:45.817059 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l5rrk" podStartSLOduration=2.379061445 podStartE2EDuration="3.817042373s" podCreationTimestamp="2025-12-05 07:31:42 +0000 UTC" firstStartedPulling="2025-12-05 07:31:43.780409413 +0000 UTC m=+2737.849925765" lastFinishedPulling="2025-12-05 
07:31:45.218390341 +0000 UTC m=+2739.287906693" observedRunningTime="2025-12-05 07:31:45.81251656 +0000 UTC m=+2739.882032902" watchObservedRunningTime="2025-12-05 07:31:45.817042373 +0000 UTC m=+2739.886558705" Dec 05 07:31:52 crc kubenswrapper[4780]: I1205 07:31:52.740060 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:52 crc kubenswrapper[4780]: I1205 07:31:52.740424 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:52 crc kubenswrapper[4780]: I1205 07:31:52.779028 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:52 crc kubenswrapper[4780]: I1205 07:31:52.890794 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:53 crc kubenswrapper[4780]: I1205 07:31:53.017206 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:54 crc kubenswrapper[4780]: I1205 07:31:54.856185 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l5rrk" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="registry-server" containerID="cri-o://f1f7c7ee739c2b768486a460408a321ebd84ecfda95a8834d95ae1e194517fda" gracePeriod=2 Dec 05 07:31:55 crc kubenswrapper[4780]: I1205 07:31:55.865392 4780 generic.go:334] "Generic (PLEG): container finished" podID="10b886b0-3256-4969-9fec-9381e3af75dc" containerID="f1f7c7ee739c2b768486a460408a321ebd84ecfda95a8834d95ae1e194517fda" exitCode=0 Dec 05 07:31:55 crc kubenswrapper[4780]: I1205 07:31:55.865470 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerDied","Data":"f1f7c7ee739c2b768486a460408a321ebd84ecfda95a8834d95ae1e194517fda"} Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.408673 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.467155 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content\") pod \"10b886b0-3256-4969-9fec-9381e3af75dc\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.467259 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities\") pod \"10b886b0-3256-4969-9fec-9381e3af75dc\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.467407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts2pw\" (UniqueName: \"kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw\") pod \"10b886b0-3256-4969-9fec-9381e3af75dc\" (UID: \"10b886b0-3256-4969-9fec-9381e3af75dc\") " Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.468402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities" (OuterVolumeSpecName: "utilities") pod "10b886b0-3256-4969-9fec-9381e3af75dc" (UID: "10b886b0-3256-4969-9fec-9381e3af75dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.472775 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw" (OuterVolumeSpecName: "kube-api-access-ts2pw") pod "10b886b0-3256-4969-9fec-9381e3af75dc" (UID: "10b886b0-3256-4969-9fec-9381e3af75dc"). InnerVolumeSpecName "kube-api-access-ts2pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.527256 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10b886b0-3256-4969-9fec-9381e3af75dc" (UID: "10b886b0-3256-4969-9fec-9381e3af75dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.569000 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.569047 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts2pw\" (UniqueName: \"kubernetes.io/projected/10b886b0-3256-4969-9fec-9381e3af75dc-kube-api-access-ts2pw\") on node \"crc\" DevicePath \"\"" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.569057 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10b886b0-3256-4969-9fec-9381e3af75dc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.878818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5rrk" event={"ID":"10b886b0-3256-4969-9fec-9381e3af75dc","Type":"ContainerDied","Data":"02cf8f02ca62346c354fda415a826a9dfaa828e96a1fd89ab7def3448ec0301a"} Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.878889 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5rrk" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.878913 4780 scope.go:117] "RemoveContainer" containerID="f1f7c7ee739c2b768486a460408a321ebd84ecfda95a8834d95ae1e194517fda" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.912253 4780 scope.go:117] "RemoveContainer" containerID="b3cabe23721fbcc9a7d2cb7452df816187cfa2092c3f6973a03c75879f90fa89" Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.915833 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.924649 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l5rrk"] Dec 05 07:31:56 crc kubenswrapper[4780]: I1205 07:31:56.933347 4780 scope.go:117] "RemoveContainer" containerID="b18c2647341269dde5e4f5acbc48fdc17c292c18b63b3d1893ce7b03124b8b21" Dec 05 07:31:58 crc kubenswrapper[4780]: I1205 07:31:58.147479 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" path="/var/lib/kubelet/pods/10b886b0-3256-4969-9fec-9381e3af75dc/volumes" Dec 05 07:33:29 crc kubenswrapper[4780]: I1205 07:33:29.908095 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:33:29 crc kubenswrapper[4780]: I1205 07:33:29.909102 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:33:59 crc kubenswrapper[4780]: I1205 07:33:59.907583 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:33:59 crc kubenswrapper[4780]: I1205 07:33:59.908259 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:34:29 crc kubenswrapper[4780]: I1205 07:34:29.908499 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:34:29 crc kubenswrapper[4780]: I1205 07:34:29.909336 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:34:29 crc kubenswrapper[4780]: I1205 07:34:29.909392 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:34:29 crc kubenswrapper[4780]: I1205 07:34:29.910220 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:34:29 crc kubenswrapper[4780]: I1205 07:34:29.910306 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06" gracePeriod=600 Dec 05 07:34:30 crc kubenswrapper[4780]: I1205 07:34:30.078090 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06" exitCode=0 Dec 05 07:34:30 crc kubenswrapper[4780]: I1205 07:34:30.078138 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06"} Dec 05 07:34:30 crc kubenswrapper[4780]: I1205 07:34:30.078171 4780 scope.go:117] "RemoveContainer" containerID="2c311c16176806d487b125925988e716a3a442d8c148899cf9c86299a0a89477" Dec 05 07:34:31 crc kubenswrapper[4780]: I1205 07:34:31.088252 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a"} Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.903222 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:35 crc 
kubenswrapper[4780]: E1205 07:36:35.904113 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="extract-utilities" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.904128 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="extract-utilities" Dec 05 07:36:35 crc kubenswrapper[4780]: E1205 07:36:35.904144 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="registry-server" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.904151 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="registry-server" Dec 05 07:36:35 crc kubenswrapper[4780]: E1205 07:36:35.904165 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="extract-content" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.904172 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="extract-content" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.904304 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b886b0-3256-4969-9fec-9381e3af75dc" containerName="registry-server" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.905319 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:35 crc kubenswrapper[4780]: I1205 07:36:35.909706 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.048836 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.049257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zjsj\" (UniqueName: \"kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.049286 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.150059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.150134 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.150194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zjsj\" (UniqueName: \"kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.150544 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.150573 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.169231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zjsj\" (UniqueName: \"kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj\") pod \"redhat-marketplace-crl9f\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.226264 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:36 crc kubenswrapper[4780]: I1205 07:36:36.639217 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:37 crc kubenswrapper[4780]: I1205 07:36:37.081472 4780 generic.go:334] "Generic (PLEG): container finished" podID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerID="f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1" exitCode=0 Dec 05 07:36:37 crc kubenswrapper[4780]: I1205 07:36:37.081517 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerDied","Data":"f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1"} Dec 05 07:36:37 crc kubenswrapper[4780]: I1205 07:36:37.081545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerStarted","Data":"715ec7149acc3c9cb4934151793e21492692e6be5d7bba3100fb42450a0f3f6d"} Dec 05 07:36:37 crc kubenswrapper[4780]: I1205 07:36:37.084666 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:36:38 crc kubenswrapper[4780]: I1205 07:36:38.089863 4780 generic.go:334] "Generic (PLEG): container finished" podID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerID="3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e" exitCode=0 Dec 05 07:36:38 crc kubenswrapper[4780]: I1205 07:36:38.089911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerDied","Data":"3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e"} Dec 05 07:36:39 crc kubenswrapper[4780]: I1205 07:36:39.098677 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerStarted","Data":"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2"} Dec 05 07:36:39 crc kubenswrapper[4780]: I1205 07:36:39.124307 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crl9f" podStartSLOduration=2.7188251599999997 podStartE2EDuration="4.124285238s" podCreationTimestamp="2025-12-05 07:36:35 +0000 UTC" firstStartedPulling="2025-12-05 07:36:37.084407582 +0000 UTC m=+3031.153923924" lastFinishedPulling="2025-12-05 07:36:38.48986767 +0000 UTC m=+3032.559384002" observedRunningTime="2025-12-05 07:36:39.117447545 +0000 UTC m=+3033.186963897" watchObservedRunningTime="2025-12-05 07:36:39.124285238 +0000 UTC m=+3033.193801570" Dec 05 07:36:46 crc kubenswrapper[4780]: I1205 07:36:46.227039 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:46 crc kubenswrapper[4780]: I1205 07:36:46.228082 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:46 crc kubenswrapper[4780]: I1205 07:36:46.286436 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:47 crc kubenswrapper[4780]: I1205 07:36:47.211762 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:48 crc kubenswrapper[4780]: I1205 07:36:48.921501 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.199322 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crl9f" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="registry-server" containerID="cri-o://4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2" gracePeriod=2 Dec 05 07:36:49 crc kubenswrapper[4780]: E1205 07:36:49.401351 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c6cf60_e62d_4387_9ed6_f7ab946395b8.slice/crio-4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2.scope\": RecentStats: unable to find data in memory cache]" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.641968 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.840833 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities\") pod \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.840934 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content\") pod \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.841031 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zjsj\" (UniqueName: \"kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj\") pod \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\" (UID: \"f1c6cf60-e62d-4387-9ed6-f7ab946395b8\") " Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.841968 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities" (OuterVolumeSpecName: "utilities") pod "f1c6cf60-e62d-4387-9ed6-f7ab946395b8" (UID: "f1c6cf60-e62d-4387-9ed6-f7ab946395b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.853816 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj" (OuterVolumeSpecName: "kube-api-access-9zjsj") pod "f1c6cf60-e62d-4387-9ed6-f7ab946395b8" (UID: "f1c6cf60-e62d-4387-9ed6-f7ab946395b8"). InnerVolumeSpecName "kube-api-access-9zjsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.860576 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1c6cf60-e62d-4387-9ed6-f7ab946395b8" (UID: "f1c6cf60-e62d-4387-9ed6-f7ab946395b8"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.942736 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zjsj\" (UniqueName: \"kubernetes.io/projected/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-kube-api-access-9zjsj\") on node \"crc\" DevicePath \"\"" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.942773 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:36:49 crc kubenswrapper[4780]: I1205 07:36:49.942783 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c6cf60-e62d-4387-9ed6-f7ab946395b8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.207633 4780 generic.go:334] "Generic (PLEG): container finished" podID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerID="4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2" exitCode=0 Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.207683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerDied","Data":"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2"} Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.207687 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crl9f" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.207708 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crl9f" event={"ID":"f1c6cf60-e62d-4387-9ed6-f7ab946395b8","Type":"ContainerDied","Data":"715ec7149acc3c9cb4934151793e21492692e6be5d7bba3100fb42450a0f3f6d"} Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.207726 4780 scope.go:117] "RemoveContainer" containerID="4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.227506 4780 scope.go:117] "RemoveContainer" containerID="3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.231120 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.237751 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crl9f"] Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.243083 4780 scope.go:117] "RemoveContainer" containerID="f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.285775 4780 scope.go:117] "RemoveContainer" containerID="4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2" Dec 05 07:36:50 crc kubenswrapper[4780]: E1205 07:36:50.286350 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2\": container with ID starting with 4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2 not found: ID does not exist" containerID="4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2" Dec 05 07:36:50 crc 
kubenswrapper[4780]: I1205 07:36:50.286400 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2"} err="failed to get container status \"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2\": rpc error: code = NotFound desc = could not find container \"4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2\": container with ID starting with 4b17def6776ffd9305a257afd7cf06389eaf5486099410c43daba9c6991f01a2 not found: ID does not exist" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.286435 4780 scope.go:117] "RemoveContainer" containerID="3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e" Dec 05 07:36:50 crc kubenswrapper[4780]: E1205 07:36:50.286917 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e\": container with ID starting with 3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e not found: ID does not exist" containerID="3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.286947 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e"} err="failed to get container status \"3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e\": rpc error: code = NotFound desc = could not find container \"3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e\": container with ID starting with 3192ad0a9b612f4f2e5a3f899ddd230def21ba9d8df9d7652c7cd59d55c4c20e not found: ID does not exist" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.286977 4780 scope.go:117] "RemoveContainer" containerID="f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1" Dec 05 07:36:50 crc kubenswrapper[4780]: E1205 07:36:50.287331 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1\": container with ID starting with f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1 not found: ID does not exist" containerID="f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1" Dec 05 07:36:50 crc kubenswrapper[4780]: I1205 07:36:50.287364 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1"} err="failed to get container status \"f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1\": rpc error: code = NotFound desc = could not find container \"f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1\": container with ID starting with f163bd003b13d694ec2ea6597668e003cd184befaf3e28578fef7bba2cc98ef1 not found: ID does not exist" Dec 05 07:36:52 crc kubenswrapper[4780]: I1205 07:36:52.148297 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" path="/var/lib/kubelet/pods/f1c6cf60-e62d-4387-9ed6-f7ab946395b8/volumes" Dec 05 07:36:59 crc kubenswrapper[4780]: I1205 07:36:59.907991 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:36:59 crc kubenswrapper[4780]: I1205 07:36:59.908586 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:37:29 crc kubenswrapper[4780]: I1205 07:37:29.907832 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:37:29 crc kubenswrapper[4780]: I1205 07:37:29.908474 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:37:59 crc kubenswrapper[4780]: I1205 07:37:59.907708 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:37:59 crc kubenswrapper[4780]: I1205 07:37:59.908344 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:37:59 crc kubenswrapper[4780]: I1205 07:37:59.908409 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:37:59 crc kubenswrapper[4780]: I1205 07:37:59.909262 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:37:59 crc kubenswrapper[4780]: I1205 07:37:59.909346 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" gracePeriod=600 Dec 05 07:38:00 crc kubenswrapper[4780]: E1205 07:38:00.031676 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:38:00 crc 
kubenswrapper[4780]: I1205 07:38:00.741615 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" exitCode=0 Dec 05 07:38:00 crc kubenswrapper[4780]: I1205 07:38:00.741679 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a"} Dec 05 07:38:00 crc kubenswrapper[4780]: I1205 07:38:00.741733 4780 scope.go:117] "RemoveContainer" containerID="d829090ba30b01359d015ba9f6b8f803766e657b97a0050b18d00c93ab5bfe06" Dec 05 07:38:00 crc kubenswrapper[4780]: I1205 07:38:00.743425 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:38:00 crc kubenswrapper[4780]: E1205 07:38:00.744103 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:38:15 crc kubenswrapper[4780]: I1205 07:38:15.139403 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:38:15 crc kubenswrapper[4780]: E1205 07:38:15.140396 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:38:26 crc kubenswrapper[4780]: I1205 07:38:26.143908 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:38:26 crc kubenswrapper[4780]: E1205 07:38:26.144821 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:38:40 crc kubenswrapper[4780]: I1205 07:38:40.140035 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:38:40 crc kubenswrapper[4780]: E1205 07:38:40.140599 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:38:51 crc kubenswrapper[4780]: I1205 07:38:51.145566 4780 scope.go:117] "RemoveContainer" 
containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:38:51 crc kubenswrapper[4780]: E1205 07:38:51.146744 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:39:05 crc kubenswrapper[4780]: I1205 07:39:05.138752 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:39:05 crc kubenswrapper[4780]: E1205 07:39:05.139680 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:39:18 crc kubenswrapper[4780]: I1205 07:39:18.139611 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:39:18 crc kubenswrapper[4780]: E1205 07:39:18.140477 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.109369 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:24 crc kubenswrapper[4780]: E1205 07:39:24.110462 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="extract-content" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.110482 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="extract-content" Dec 05 07:39:24 crc kubenswrapper[4780]: E1205 07:39:24.110510 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="registry-server" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.110520 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="registry-server" Dec 05 07:39:24 crc kubenswrapper[4780]: E1205 07:39:24.110544 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="extract-utilities" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.110554 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="extract-utilities" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.110798 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c6cf60-e62d-4387-9ed6-f7ab946395b8" containerName="registry-server" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.112471 4780 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.128826 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.250214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.250301 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq86k\" (UniqueName: \"kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.250469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.352138 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.352211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq86k\" (UniqueName: \"kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.352252 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.352748 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.352775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.374374 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pq86k\" (UniqueName: \"kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k\") pod \"redhat-operators-58wpl\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.443540 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:24 crc kubenswrapper[4780]: I1205 07:39:24.871686 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:25 crc kubenswrapper[4780]: I1205 07:39:25.425261 4780 generic.go:334] "Generic (PLEG): container finished" podID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerID="191f32b3c154285bf05f6dc02842964f83d562555188cb17dbb1dad7e33b324a" exitCode=0 Dec 05 07:39:25 crc kubenswrapper[4780]: I1205 07:39:25.425308 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerDied","Data":"191f32b3c154285bf05f6dc02842964f83d562555188cb17dbb1dad7e33b324a"} Dec 05 07:39:25 crc kubenswrapper[4780]: I1205 07:39:25.425534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerStarted","Data":"5dbe1dad3c0e200f1ae885d3229121692d33cd61d4a229e1611f074e8ab6d8d6"} Dec 05 07:39:26 crc kubenswrapper[4780]: I1205 07:39:26.434147 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerStarted","Data":"dac0a10a500ce34b44df97ef70510f77a7156ad5ea5906e3269f66fbd1f44964"} Dec 05 07:39:27 crc kubenswrapper[4780]: I1205 07:39:27.441552 4780 generic.go:334] "Generic (PLEG): container finished" podID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerID="dac0a10a500ce34b44df97ef70510f77a7156ad5ea5906e3269f66fbd1f44964" exitCode=0 Dec 05 07:39:27 crc kubenswrapper[4780]: I1205 07:39:27.441598 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerDied","Data":"dac0a10a500ce34b44df97ef70510f77a7156ad5ea5906e3269f66fbd1f44964"} Dec 05 07:39:28 crc kubenswrapper[4780]: I1205 07:39:28.455564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerStarted","Data":"309e06771c00264f9839e37383827e1e88ca93f9930db1ca855df233efdd04ed"} Dec 05 07:39:28 crc kubenswrapper[4780]: I1205 07:39:28.480448 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58wpl" podStartSLOduration=2.11024433 podStartE2EDuration="4.480428661s" podCreationTimestamp="2025-12-05 07:39:24 +0000 UTC" firstStartedPulling="2025-12-05 07:39:25.427513406 +0000 UTC m=+3199.497029748" lastFinishedPulling="2025-12-05 07:39:27.797697747 +0000 UTC m=+3201.867214079" observedRunningTime="2025-12-05 07:39:28.478451697 +0000 UTC m=+3202.547968109" watchObservedRunningTime="2025-12-05 07:39:28.480428661 +0000 UTC m=+3202.549944983" Dec 05 07:39:30 crc kubenswrapper[4780]: I1205 07:39:30.140452 4780 scope.go:117] "RemoveContainer" 
containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:39:30 crc kubenswrapper[4780]: E1205 07:39:30.140920 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:39:34 crc kubenswrapper[4780]: I1205 07:39:34.444441 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:34 crc kubenswrapper[4780]: I1205 07:39:34.444838 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:34 crc kubenswrapper[4780]: I1205 07:39:34.490389 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:34 crc kubenswrapper[4780]: I1205 07:39:34.540929 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:34 crc kubenswrapper[4780]: I1205 07:39:34.730972 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:36 crc kubenswrapper[4780]: I1205 07:39:36.509263 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58wpl" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="registry-server" containerID="cri-o://309e06771c00264f9839e37383827e1e88ca93f9930db1ca855df233efdd04ed" gracePeriod=2 Dec 05 07:39:38 crc kubenswrapper[4780]: I1205 07:39:38.526340 4780 generic.go:334] "Generic (PLEG): container finished" podID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerID="309e06771c00264f9839e37383827e1e88ca93f9930db1ca855df233efdd04ed" exitCode=0 Dec 05 07:39:38 crc kubenswrapper[4780]: I1205 07:39:38.526464 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerDied","Data":"309e06771c00264f9839e37383827e1e88ca93f9930db1ca855df233efdd04ed"} Dec 05 07:39:38 crc kubenswrapper[4780]: I1205 07:39:38.882801 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.076490 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content\") pod \"7db40a88-8b26-41d8-b6fb-15d62e636331\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.076625 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities\") pod \"7db40a88-8b26-41d8-b6fb-15d62e636331\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.076686 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq86k\" (UniqueName: \"kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k\") pod \"7db40a88-8b26-41d8-b6fb-15d62e636331\" (UID: \"7db40a88-8b26-41d8-b6fb-15d62e636331\") " Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.077740 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities" (OuterVolumeSpecName: "utilities") pod "7db40a88-8b26-41d8-b6fb-15d62e636331" (UID: "7db40a88-8b26-41d8-b6fb-15d62e636331"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.083194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k" (OuterVolumeSpecName: "kube-api-access-pq86k") pod "7db40a88-8b26-41d8-b6fb-15d62e636331" (UID: "7db40a88-8b26-41d8-b6fb-15d62e636331"). InnerVolumeSpecName "kube-api-access-pq86k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.178785 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.179317 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq86k\" (UniqueName: \"kubernetes.io/projected/7db40a88-8b26-41d8-b6fb-15d62e636331-kube-api-access-pq86k\") on node \"crc\" DevicePath \"\"" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.182300 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7db40a88-8b26-41d8-b6fb-15d62e636331" (UID: "7db40a88-8b26-41d8-b6fb-15d62e636331"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.281767 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db40a88-8b26-41d8-b6fb-15d62e636331-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.539675 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58wpl" event={"ID":"7db40a88-8b26-41d8-b6fb-15d62e636331","Type":"ContainerDied","Data":"5dbe1dad3c0e200f1ae885d3229121692d33cd61d4a229e1611f074e8ab6d8d6"} Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.539740 4780 scope.go:117] "RemoveContainer" containerID="309e06771c00264f9839e37383827e1e88ca93f9930db1ca855df233efdd04ed" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.539771 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58wpl" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.571173 4780 scope.go:117] "RemoveContainer" containerID="dac0a10a500ce34b44df97ef70510f77a7156ad5ea5906e3269f66fbd1f44964" Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.576168 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.581554 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58wpl"] Dec 05 07:39:39 crc kubenswrapper[4780]: I1205 07:39:39.604611 4780 scope.go:117] "RemoveContainer" containerID="191f32b3c154285bf05f6dc02842964f83d562555188cb17dbb1dad7e33b324a" Dec 05 07:39:40 crc kubenswrapper[4780]: I1205 07:39:40.152066 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" path="/var/lib/kubelet/pods/7db40a88-8b26-41d8-b6fb-15d62e636331/volumes" Dec 05 07:39:44 crc kubenswrapper[4780]: I1205 07:39:44.138164 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:39:44 crc kubenswrapper[4780]: E1205 07:39:44.138617 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:39:58 crc kubenswrapper[4780]: I1205 07:39:58.138342 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:39:58 crc kubenswrapper[4780]: E1205 07:39:58.139228 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:40:10 crc kubenswrapper[4780]: I1205 07:40:10.141906 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:40:10 crc kubenswrapper[4780]: E1205 07:40:10.142629 
4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:40:22 crc kubenswrapper[4780]: I1205 07:40:22.139819 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:40:22 crc kubenswrapper[4780]: E1205 07:40:22.140668 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:40:35 crc kubenswrapper[4780]: I1205 07:40:35.138872 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:40:35 crc kubenswrapper[4780]: E1205 07:40:35.139560 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:40:50 crc kubenswrapper[4780]: I1205 07:40:50.164857 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:40:50 crc kubenswrapper[4780]: E1205 07:40:50.165663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:41:05 crc kubenswrapper[4780]: I1205 07:41:05.138647 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:41:05 crc kubenswrapper[4780]: E1205 07:41:05.139514 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:41:20 crc kubenswrapper[4780]: I1205 07:41:20.139136 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:41:20 crc kubenswrapper[4780]: E1205 07:41:20.140729 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:41:32 crc kubenswrapper[4780]: I1205 07:41:32.138996 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:41:32 crc kubenswrapper[4780]: E1205 07:41:32.139857 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:41:44 crc kubenswrapper[4780]: I1205 07:41:44.138840 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:41:44 crc kubenswrapper[4780]: E1205 07:41:44.139534 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:41:57 crc kubenswrapper[4780]: I1205 07:41:57.138533 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:41:57 crc kubenswrapper[4780]: E1205 07:41:57.139284 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:08 crc kubenswrapper[4780]: I1205 07:42:08.138415 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:42:08 crc kubenswrapper[4780]: E1205 07:42:08.139191 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:19 crc kubenswrapper[4780]: I1205 07:42:19.138792 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:42:19 crc kubenswrapper[4780]: E1205 07:42:19.139462 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:33 crc kubenswrapper[4780]: I1205 07:42:33.138769 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:42:33 crc kubenswrapper[4780]: E1205 07:42:33.139508 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:44 crc kubenswrapper[4780]: I1205 07:42:44.138664 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:42:44 crc kubenswrapper[4780]: E1205 07:42:44.139441 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.712345 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:42:46 crc kubenswrapper[4780]: E1205 07:42:46.712980 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="extract-content" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.712994 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="extract-content" Dec 05 07:42:46 crc kubenswrapper[4780]: E1205 07:42:46.713011 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="extract-utilities" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.713017 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="extract-utilities" Dec 05 07:42:46 crc kubenswrapper[4780]: E1205 07:42:46.713044 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="registry-server" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.713050 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="registry-server" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.713209 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db40a88-8b26-41d8-b6fb-15d62e636331" containerName="registry-server" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.714242 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.722490 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.805620 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxj6k\" (UniqueName: \"kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.805748 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.805803 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.907509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxj6k\" (UniqueName: \"kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.907572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.907595 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.908124 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.908168 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:46 crc kubenswrapper[4780]: I1205 07:42:46.935112 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mxj6k\" (UniqueName: \"kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k\") pod \"certified-operators-jk5s9\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.031237 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.520665 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.922872 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b893336-0ced-4382-882d-23c7b2cff812" containerID="3df898a5a5ce9be94b4ad996e6c2e427ad5df98b7f4d28ada2f39f2344874276" exitCode=0 Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.922989 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerDied","Data":"3df898a5a5ce9be94b4ad996e6c2e427ad5df98b7f4d28ada2f39f2344874276"} Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.923227 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerStarted","Data":"01f681594992b59d0a8d3d7b243b1c753c0f5f8c314977002d005aef6d534438"} Dec 05 07:42:47 crc kubenswrapper[4780]: I1205 07:42:47.924494 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:42:48 crc kubenswrapper[4780]: I1205 07:42:48.932912 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b893336-0ced-4382-882d-23c7b2cff812" containerID="bbaf37db00b35b8c947a3e6711e06220a3a39817dcf207767116267f0fdfa245" exitCode=0 Dec 05 07:42:48 crc kubenswrapper[4780]: I1205 07:42:48.932991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerDied","Data":"bbaf37db00b35b8c947a3e6711e06220a3a39817dcf207767116267f0fdfa245"} Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.114688 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.116483 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.127181 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.242810 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lr6\" (UniqueName: \"kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.243062 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.243238 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.345003 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lr6\" (UniqueName: \"kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.345105 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.345155 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.345620 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.345821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.363715 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-77lr6\" (UniqueName: \"kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6\") pod \"community-operators-tpckl\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.445292 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:49 crc kubenswrapper[4780]: W1205 07:42:49.925533 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4132c1_ebd3_436c_97c8_6144b8f41918.slice/crio-f4b920ee426c87703557fe0642d00cb201153756b99da934a2265743fccb9ba4 WatchSource:0}: Error finding container f4b920ee426c87703557fe0642d00cb201153756b99da934a2265743fccb9ba4: Status 404 returned error can't find the container with id f4b920ee426c87703557fe0642d00cb201153756b99da934a2265743fccb9ba4 Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.927288 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.941413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerStarted","Data":"47c25bd7591f1124717837586a555f75bea861aa110b086eaeba99a52b84219c"} Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.942324 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerStarted","Data":"f4b920ee426c87703557fe0642d00cb201153756b99da934a2265743fccb9ba4"} Dec 05 07:42:49 crc kubenswrapper[4780]: I1205 07:42:49.965179 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jk5s9" podStartSLOduration=2.494922827 podStartE2EDuration="3.965163546s" podCreationTimestamp="2025-12-05 07:42:46 +0000 UTC" firstStartedPulling="2025-12-05 07:42:47.924288337 +0000 UTC m=+3401.993804669" lastFinishedPulling="2025-12-05 07:42:49.394529066 +0000 UTC m=+3403.464045388" observedRunningTime="2025-12-05 07:42:49.963399719 +0000 UTC m=+3404.032916061" watchObservedRunningTime="2025-12-05 07:42:49.965163546 +0000 UTC m=+3404.034679878" Dec 05 07:42:50 crc kubenswrapper[4780]: I1205 07:42:50.949931 4780 generic.go:334] "Generic (PLEG): container finished" podID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerID="17d056e39e61fe4e93a5c5b59829f4721545672a9d6e283bb449b6fe37ad2e58" exitCode=0 Dec 05 07:42:50 crc kubenswrapper[4780]: I1205 07:42:50.950487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerDied","Data":"17d056e39e61fe4e93a5c5b59829f4721545672a9d6e283bb449b6fe37ad2e58"} Dec 05 07:42:51 crc kubenswrapper[4780]: I1205 07:42:51.960472 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerStarted","Data":"a45e5cca225828cf242d318839b3b8f1dec744e431f15725d71d0198b17cd94a"} Dec 05 07:42:52 crc kubenswrapper[4780]: I1205 07:42:52.968987 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerID="a45e5cca225828cf242d318839b3b8f1dec744e431f15725d71d0198b17cd94a" exitCode=0 Dec 05 07:42:52 crc kubenswrapper[4780]: I1205 07:42:52.969025 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerDied","Data":"a45e5cca225828cf242d318839b3b8f1dec744e431f15725d71d0198b17cd94a"} Dec 05 07:42:53 crc kubenswrapper[4780]: I1205 07:42:53.980407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerStarted","Data":"b5f978f9042ce9ef7552dda0aa4ef061bb0f9446668501883e8861b52af0e31c"} Dec 05 07:42:53 crc kubenswrapper[4780]: I1205 07:42:53.996736 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tpckl" podStartSLOduration=2.477612966 podStartE2EDuration="4.996708679s" podCreationTimestamp="2025-12-05 07:42:49 +0000 UTC" firstStartedPulling="2025-12-05 07:42:50.951704004 +0000 UTC m=+3405.021220336" lastFinishedPulling="2025-12-05 07:42:53.470799707 +0000 UTC m=+3407.540316049" observedRunningTime="2025-12-05 07:42:53.995741583 +0000 UTC m=+3408.065257935" watchObservedRunningTime="2025-12-05 07:42:53.996708679 +0000 UTC m=+3408.066225031" Dec 05 07:42:56 crc kubenswrapper[4780]: I1205 07:42:56.144095 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:42:56 crc kubenswrapper[4780]: E1205 07:42:56.144498 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:42:57 crc kubenswrapper[4780]: I1205 07:42:57.032559 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:57 crc kubenswrapper[4780]: I1205 07:42:57.033180 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:57 crc kubenswrapper[4780]: I1205 07:42:57.071941 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:58 crc kubenswrapper[4780]: I1205 07:42:58.045777 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:42:58 crc kubenswrapper[4780]: I1205 07:42:58.101962 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:42:59 crc kubenswrapper[4780]: I1205 07:42:59.445980 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:59 crc kubenswrapper[4780]: I1205 07:42:59.446385 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:42:59 crc kubenswrapper[4780]: I1205 07:42:59.487495 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:43:00 crc kubenswrapper[4780]: I1205 07:43:00.016639 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jk5s9" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="registry-server" containerID="cri-o://47c25bd7591f1124717837586a555f75bea861aa110b086eaeba99a52b84219c" gracePeriod=2 Dec 05 07:43:00 crc kubenswrapper[4780]: I1205 07:43:00.055437 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.024383 4780 generic.go:334] "Generic (PLEG): container finished" podID="6b893336-0ced-4382-882d-23c7b2cff812" containerID="47c25bd7591f1124717837586a555f75bea861aa110b086eaeba99a52b84219c" exitCode=0 Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.024460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerDied","Data":"47c25bd7591f1124717837586a555f75bea861aa110b086eaeba99a52b84219c"} Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.466387 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.646825 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxj6k\" (UniqueName: \"kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k\") pod \"6b893336-0ced-4382-882d-23c7b2cff812\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.646914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities\") pod \"6b893336-0ced-4382-882d-23c7b2cff812\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.646949 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content\") pod \"6b893336-0ced-4382-882d-23c7b2cff812\" (UID: \"6b893336-0ced-4382-882d-23c7b2cff812\") " Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.647985 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities" (OuterVolumeSpecName: "utilities") pod "6b893336-0ced-4382-882d-23c7b2cff812" (UID: "6b893336-0ced-4382-882d-23c7b2cff812"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.653600 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k" (OuterVolumeSpecName: "kube-api-access-mxj6k") pod "6b893336-0ced-4382-882d-23c7b2cff812" (UID: "6b893336-0ced-4382-882d-23c7b2cff812"). InnerVolumeSpecName "kube-api-access-mxj6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.693085 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b893336-0ced-4382-882d-23c7b2cff812" (UID: "6b893336-0ced-4382-882d-23c7b2cff812"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.748658 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxj6k\" (UniqueName: \"kubernetes.io/projected/6b893336-0ced-4382-882d-23c7b2cff812-kube-api-access-mxj6k\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.748700 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:01 crc kubenswrapper[4780]: I1205 07:43:01.748715 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b893336-0ced-4382-882d-23c7b2cff812-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.031697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk5s9" event={"ID":"6b893336-0ced-4382-882d-23c7b2cff812","Type":"ContainerDied","Data":"01f681594992b59d0a8d3d7b243b1c753c0f5f8c314977002d005aef6d534438"} Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.031741 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk5s9" Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.031751 4780 scope.go:117] "RemoveContainer" containerID="47c25bd7591f1124717837586a555f75bea861aa110b086eaeba99a52b84219c" Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.048289 4780 scope.go:117] "RemoveContainer" containerID="bbaf37db00b35b8c947a3e6711e06220a3a39817dcf207767116267f0fdfa245" Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.060572 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.065704 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jk5s9"] Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.078676 4780 scope.go:117] "RemoveContainer" containerID="3df898a5a5ce9be94b4ad996e6c2e427ad5df98b7f4d28ada2f39f2344874276" Dec 05 07:43:02 crc kubenswrapper[4780]: I1205 07:43:02.148022 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b893336-0ced-4382-882d-23c7b2cff812" path="/var/lib/kubelet/pods/6b893336-0ced-4382-882d-23c7b2cff812/volumes" Dec 05 07:43:03 crc kubenswrapper[4780]: I1205 07:43:03.505937 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:43:03 crc kubenswrapper[4780]: I1205 07:43:03.506619 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tpckl" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="registry-server" containerID="cri-o://b5f978f9042ce9ef7552dda0aa4ef061bb0f9446668501883e8861b52af0e31c" gracePeriod=2 Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 
07:43:04.052400 4780 generic.go:334] "Generic (PLEG): container finished" podID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerID="b5f978f9042ce9ef7552dda0aa4ef061bb0f9446668501883e8861b52af0e31c" exitCode=0 Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.052437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerDied","Data":"b5f978f9042ce9ef7552dda0aa4ef061bb0f9446668501883e8861b52af0e31c"} Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.404864 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.488867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content\") pod \"be4132c1-ebd3-436c-97c8-6144b8f41918\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.488986 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities\") pod \"be4132c1-ebd3-436c-97c8-6144b8f41918\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.489072 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lr6\" (UniqueName: \"kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6\") pod \"be4132c1-ebd3-436c-97c8-6144b8f41918\" (UID: \"be4132c1-ebd3-436c-97c8-6144b8f41918\") " Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.490074 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities" (OuterVolumeSpecName: "utilities") pod "be4132c1-ebd3-436c-97c8-6144b8f41918" (UID: "be4132c1-ebd3-436c-97c8-6144b8f41918"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.506301 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6" (OuterVolumeSpecName: "kube-api-access-77lr6") pod "be4132c1-ebd3-436c-97c8-6144b8f41918" (UID: "be4132c1-ebd3-436c-97c8-6144b8f41918"). InnerVolumeSpecName "kube-api-access-77lr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.538907 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be4132c1-ebd3-436c-97c8-6144b8f41918" (UID: "be4132c1-ebd3-436c-97c8-6144b8f41918"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.590438 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.590480 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lr6\" (UniqueName: \"kubernetes.io/projected/be4132c1-ebd3-436c-97c8-6144b8f41918-kube-api-access-77lr6\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:04 crc kubenswrapper[4780]: I1205 07:43:04.590493 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be4132c1-ebd3-436c-97c8-6144b8f41918-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.060336 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpckl" event={"ID":"be4132c1-ebd3-436c-97c8-6144b8f41918","Type":"ContainerDied","Data":"f4b920ee426c87703557fe0642d00cb201153756b99da934a2265743fccb9ba4"} Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.060407 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpckl" Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.060697 4780 scope.go:117] "RemoveContainer" containerID="b5f978f9042ce9ef7552dda0aa4ef061bb0f9446668501883e8861b52af0e31c" Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.102925 4780 scope.go:117] "RemoveContainer" containerID="a45e5cca225828cf242d318839b3b8f1dec744e431f15725d71d0198b17cd94a" Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.104208 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.109659 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tpckl"] Dec 05 07:43:05 crc kubenswrapper[4780]: I1205 07:43:05.123220 4780 scope.go:117] "RemoveContainer" containerID="17d056e39e61fe4e93a5c5b59829f4721545672a9d6e283bb449b6fe37ad2e58" Dec 05 07:43:06 crc kubenswrapper[4780]: I1205 07:43:06.147716 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" path="/var/lib/kubelet/pods/be4132c1-ebd3-436c-97c8-6144b8f41918/volumes" Dec 05 07:43:10 crc kubenswrapper[4780]: I1205 07:43:10.138912 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:43:11 crc kubenswrapper[4780]: I1205 07:43:11.119563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b"} Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.148432 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp"] Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150169 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150187 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150210 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="extract-content" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150218 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="extract-content" Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150236 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="extract-utilities" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150244 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="extract-utilities" Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150255 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="extract-utilities" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150264 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="extract-utilities" Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150279 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="extract-content" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150287 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="extract-content" Dec 05 07:45:00 crc kubenswrapper[4780]: E1205 07:45:00.150298 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150304 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150478 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b893336-0ced-4382-882d-23c7b2cff812" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.150492 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4132c1-ebd3-436c-97c8-6144b8f41918" containerName="registry-server" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.151092 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.153210 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.153645 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.155500 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp"] Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.190854 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lz2x\" (UniqueName: \"kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.190923 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.191000 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.291646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.291736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.292552 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.291949 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lz2x\" (UniqueName: \"kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x\") pod 
\"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.297157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.308466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lz2x\" (UniqueName: \"kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x\") pod \"collect-profiles-29415345-ht4gp\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.477630 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.890476 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp"] Dec 05 07:45:00 crc kubenswrapper[4780]: I1205 07:45:00.959990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" event={"ID":"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb","Type":"ContainerStarted","Data":"fb7de84e1d921e85c6b7e62228a82b45ef1d151ab6387db3058d647185fc1753"} Dec 05 07:45:01 crc kubenswrapper[4780]: I1205 07:45:01.968091 4780 generic.go:334] "Generic (PLEG): container finished" podID="aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" containerID="005673e271a424c05a0b1529cfbae487ba8547b4cd2636e57ccb53c5ae2ac80f" exitCode=0 Dec 05 07:45:01 crc kubenswrapper[4780]: I1205 07:45:01.968278 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" event={"ID":"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb","Type":"ContainerDied","Data":"005673e271a424c05a0b1529cfbae487ba8547b4cd2636e57ccb53c5ae2ac80f"} Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.243188 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.430774 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume\") pod \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.430916 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lz2x\" (UniqueName: \"kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x\") pod \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.430980 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume\") pod \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\" (UID: \"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb\") " Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.431605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume" (OuterVolumeSpecName: "config-volume") pod "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" (UID: "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.436035 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" (UID: "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.436053 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x" (OuterVolumeSpecName: "kube-api-access-6lz2x") pod "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" (UID: "aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb"). InnerVolumeSpecName "kube-api-access-6lz2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.532685 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.532735 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lz2x\" (UniqueName: \"kubernetes.io/projected/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-kube-api-access-6lz2x\") on node \"crc\" DevicePath \"\"" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.532748 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.982706 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" event={"ID":"aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb","Type":"ContainerDied","Data":"fb7de84e1d921e85c6b7e62228a82b45ef1d151ab6387db3058d647185fc1753"} Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.983059 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7de84e1d921e85c6b7e62228a82b45ef1d151ab6387db3058d647185fc1753" Dec 05 07:45:03 crc kubenswrapper[4780]: I1205 07:45:03.982755 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp" Dec 05 07:45:04 crc kubenswrapper[4780]: I1205 07:45:04.313155 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh"] Dec 05 07:45:04 crc kubenswrapper[4780]: I1205 07:45:04.319611 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415300-v98mh"] Dec 05 07:45:06 crc kubenswrapper[4780]: I1205 07:45:06.146932 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03" path="/var/lib/kubelet/pods/a5ea633e-b2ae-4ba0-87d7-bdcdf5dd9d03/volumes" Dec 05 07:45:24 crc kubenswrapper[4780]: I1205 07:45:24.266151 4780 scope.go:117] "RemoveContainer" containerID="6d4ee30a0f4f08d9ef1f387ef1c173cc26a5bf177568b67b91cc135bb419e4d1" Dec 05 07:45:29 crc kubenswrapper[4780]: I1205 07:45:29.908031 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:45:29 crc kubenswrapper[4780]: I1205 07:45:29.908656 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:45:59 crc kubenswrapper[4780]: I1205 07:45:59.908184 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 05 07:45:59 crc kubenswrapper[4780]: I1205 07:45:59.908849 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:46:29 crc kubenswrapper[4780]: I1205 07:46:29.907379 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:46:29 crc kubenswrapper[4780]: I1205 07:46:29.908094 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:46:29 crc kubenswrapper[4780]: I1205 07:46:29.908143 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:46:29 crc kubenswrapper[4780]: I1205 07:46:29.908828 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:46:29 crc kubenswrapper[4780]: I1205 07:46:29.908957 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b" gracePeriod=600 Dec 05 07:46:30 crc kubenswrapper[4780]: E1205 07:46:30.020601 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda640087b_e493_4ac1_bef1_a9c05dd7c0ad.slice/crio-conmon-50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b.scope\": RecentStats: unable to find data in memory cache]" Dec 05 07:46:30 crc kubenswrapper[4780]: I1205 07:46:30.653574 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b" exitCode=0 Dec 05 07:46:30 crc kubenswrapper[4780]: I1205 07:46:30.653646 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b"} Dec 05 07:46:30 crc kubenswrapper[4780]: I1205 07:46:30.653851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4"} Dec 05 07:46:30 
crc kubenswrapper[4780]: I1205 07:46:30.653874 4780 scope.go:117] "RemoveContainer" containerID="f7061f8f7b43d7cf7c3c138d1d0f49e65fdeb29cf6dcd990474ef2e901bd839a" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.109666 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:46:58 crc kubenswrapper[4780]: E1205 07:46:58.110530 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" containerName="collect-profiles" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.110544 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" containerName="collect-profiles" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.110706 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" containerName="collect-profiles" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.111666 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.128560 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.210981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.211039 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvdr\" (UniqueName: \"kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.211080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.311840 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.311987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.312017 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzvdr\" (UniqueName: 
\"kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.312364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.312551 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.332733 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzvdr\" (UniqueName: \"kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr\") pod \"redhat-marketplace-ngwmh\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.435330 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:46:58 crc kubenswrapper[4780]: I1205 07:46:58.903196 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:46:59 crc kubenswrapper[4780]: I1205 07:46:59.867414 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad2b02df-773a-467d-8c04-19f60caf91df" containerID="e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2" exitCode=0 Dec 05 07:46:59 crc kubenswrapper[4780]: I1205 07:46:59.867459 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerDied","Data":"e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2"} Dec 05 07:46:59 crc kubenswrapper[4780]: I1205 07:46:59.867724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerStarted","Data":"75d5f5ca87ee5f0269ee150bda4da7c912bd32b65085b1d84a41356653e3dc2e"} Dec 05 07:47:00 crc kubenswrapper[4780]: I1205 07:47:00.881162 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad2b02df-773a-467d-8c04-19f60caf91df" containerID="68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0" exitCode=0 Dec 05 07:47:00 crc kubenswrapper[4780]: I1205 07:47:00.881276 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerDied","Data":"68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0"} Dec 05 07:47:01 crc kubenswrapper[4780]: I1205 07:47:01.889436 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerStarted","Data":"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8"} Dec 05 
07:47:01 crc kubenswrapper[4780]: I1205 07:47:01.907073 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngwmh" podStartSLOduration=2.526681285 podStartE2EDuration="3.907055067s" podCreationTimestamp="2025-12-05 07:46:58 +0000 UTC" firstStartedPulling="2025-12-05 07:46:59.870455471 +0000 UTC m=+3653.939971793" lastFinishedPulling="2025-12-05 07:47:01.250829243 +0000 UTC m=+3655.320345575" observedRunningTime="2025-12-05 07:47:01.902800564 +0000 UTC m=+3655.972316886" watchObservedRunningTime="2025-12-05 07:47:01.907055067 +0000 UTC m=+3655.976571399" Dec 05 07:47:08 crc kubenswrapper[4780]: I1205 07:47:08.436723 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:08 crc kubenswrapper[4780]: I1205 07:47:08.437408 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:08 crc kubenswrapper[4780]: I1205 07:47:08.481459 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:09 crc kubenswrapper[4780]: I1205 07:47:09.003461 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:09 crc kubenswrapper[4780]: I1205 07:47:09.052284 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:47:10 crc kubenswrapper[4780]: I1205 07:47:10.976343 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngwmh" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="registry-server" containerID="cri-o://f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8" gracePeriod=2 Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.833429 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.909405 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content\") pod \"ad2b02df-773a-467d-8c04-19f60caf91df\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.909461 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities\") pod \"ad2b02df-773a-467d-8c04-19f60caf91df\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.909495 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzvdr\" (UniqueName: \"kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr\") pod \"ad2b02df-773a-467d-8c04-19f60caf91df\" (UID: \"ad2b02df-773a-467d-8c04-19f60caf91df\") " Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.910505 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities" (OuterVolumeSpecName: "utilities") pod "ad2b02df-773a-467d-8c04-19f60caf91df" (UID: "ad2b02df-773a-467d-8c04-19f60caf91df"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.914975 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr" (OuterVolumeSpecName: "kube-api-access-tzvdr") pod "ad2b02df-773a-467d-8c04-19f60caf91df" (UID: "ad2b02df-773a-467d-8c04-19f60caf91df"). InnerVolumeSpecName "kube-api-access-tzvdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.928698 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad2b02df-773a-467d-8c04-19f60caf91df" (UID: "ad2b02df-773a-467d-8c04-19f60caf91df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.983856 4780 generic.go:334] "Generic (PLEG): container finished" podID="ad2b02df-773a-467d-8c04-19f60caf91df" containerID="f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8" exitCode=0 Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.983919 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerDied","Data":"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8"} Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.983971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngwmh" event={"ID":"ad2b02df-773a-467d-8c04-19f60caf91df","Type":"ContainerDied","Data":"75d5f5ca87ee5f0269ee150bda4da7c912bd32b65085b1d84a41356653e3dc2e"} Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.983991 4780 scope.go:117] "RemoveContainer" containerID="f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8" Dec 05 07:47:11 crc kubenswrapper[4780]: I1205 07:47:11.983993 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngwmh" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:11.999731 4780 scope.go:117] "RemoveContainer" containerID="68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.015461 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.015548 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad2b02df-773a-467d-8c04-19f60caf91df-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.015569 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzvdr\" (UniqueName: \"kubernetes.io/projected/ad2b02df-773a-467d-8c04-19f60caf91df-kube-api-access-tzvdr\") on node \"crc\" DevicePath \"\"" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.023029 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.028326 4780 scope.go:117] "RemoveContainer" containerID="e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.031096 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngwmh"] Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.046856 4780 scope.go:117] "RemoveContainer" containerID="f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8" Dec 05 07:47:12 crc kubenswrapper[4780]: E1205 07:47:12.047384 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8\": container with ID starting with f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8 not found: ID does not exist" containerID="f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.047422 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8"} err="failed to get container status \"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8\": rpc error: code = NotFound desc = could not find container \"f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8\": container with ID starting with f4d66f4aa0638e7220ce57fd777a4bd2745f51bfd4fc68eeacb44d5b5131ecd8 not found: ID does not exist" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.047468 4780 scope.go:117] "RemoveContainer" containerID="68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0" Dec 05 07:47:12 crc kubenswrapper[4780]: E1205 07:47:12.048035 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0\": container with ID starting with 68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0 not found: ID does not exist" containerID="68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.048065 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0"} err="failed to get container status \"68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0\": rpc error: code = NotFound desc = could not find container \"68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0\": container with ID starting with 68032df7ac9c559b16beff0a1bc8900f8e4cc326faa52964075943097fbbfdc0 not found: ID does not exist" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.048084 4780 scope.go:117] "RemoveContainer" containerID="e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2" Dec 05 07:47:12 crc kubenswrapper[4780]: E1205 07:47:12.048604 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2\": container with ID starting with e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2 not found: ID does not exist" containerID="e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.048640 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2"} err="failed to get container status \"e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2\": rpc error: code = NotFound desc = could not find container \"e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2\": container with ID starting with e8dc7411c54717587978336e50d88071ba9c798d155e3daa5a87cc1c31e633c2 not found: ID does not exist" Dec 05 07:47:12 crc kubenswrapper[4780]: I1205 07:47:12.146918 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" path="/var/lib/kubelet/pods/ad2b02df-773a-467d-8c04-19f60caf91df/volumes" Dec 05 07:48:59 crc kubenswrapper[4780]: I1205 07:48:59.908489 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:48:59 crc kubenswrapper[4780]: I1205 07:48:59.909230 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:49:29 crc kubenswrapper[4780]: I1205 07:49:29.907621 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:49:29 crc kubenswrapper[4780]: I1205 07:49:29.908569 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 
07:49:30.879014 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:30 crc kubenswrapper[4780]: E1205 07:49:30.879811 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="extract-content" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.879837 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="extract-content" Dec 05 07:49:30 crc kubenswrapper[4780]: E1205 07:49:30.879860 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="extract-utilities" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.879873 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="extract-utilities" Dec 05 07:49:30 crc kubenswrapper[4780]: E1205 07:49:30.879917 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="registry-server" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.879931 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="registry-server" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.880212 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2b02df-773a-467d-8c04-19f60caf91df" containerName="registry-server" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.882003 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:30 crc kubenswrapper[4780]: I1205 07:49:30.900805 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.009826 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7bt\" (UniqueName: \"kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.009936 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.009973 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.111158 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7bt\" (UniqueName: \"kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: 
I1205 07:49:31.111230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.111272 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.111816 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.112144 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.132438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7bt\" (UniqueName: \"kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt\") pod \"redhat-operators-d2xgc\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.218768 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:31 crc kubenswrapper[4780]: I1205 07:49:31.693919 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:32 crc kubenswrapper[4780]: I1205 07:49:32.071411 4780 generic.go:334] "Generic (PLEG): container finished" podID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerID="7ab55c42262c5d5fec211d9decb7f6fbda52302af55001f5502036baa2e3b5a5" exitCode=0 Dec 05 07:49:32 crc kubenswrapper[4780]: I1205 07:49:32.071504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerDied","Data":"7ab55c42262c5d5fec211d9decb7f6fbda52302af55001f5502036baa2e3b5a5"} Dec 05 07:49:32 crc kubenswrapper[4780]: I1205 07:49:32.071707 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerStarted","Data":"34496302dbb86a313b2fa63db3ce769514e792aad31e8fb039be9a0380113f1e"} Dec 05 07:49:32 crc kubenswrapper[4780]: I1205 07:49:32.072958 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:49:33 crc kubenswrapper[4780]: I1205 07:49:33.080674 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerStarted","Data":"f14acd0d2919922942770e93671c2b2e8011e2bcc3127c5bd47ebc55bea3a555"} Dec 05 07:49:34 crc kubenswrapper[4780]: I1205 07:49:34.091544 4780 generic.go:334] "Generic (PLEG): container finished" podID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerID="f14acd0d2919922942770e93671c2b2e8011e2bcc3127c5bd47ebc55bea3a555" exitCode=0 Dec 05 07:49:34 crc kubenswrapper[4780]: I1205 07:49:34.091587 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerDied","Data":"f14acd0d2919922942770e93671c2b2e8011e2bcc3127c5bd47ebc55bea3a555"} Dec 05 07:49:35 crc kubenswrapper[4780]: I1205 07:49:35.104432 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerStarted","Data":"7b2504713143ffa40172337cde994b6698448de302bbb5a14fd59d85be433464"} Dec 05 07:49:35 crc kubenswrapper[4780]: I1205 07:49:35.135740 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2xgc" podStartSLOduration=2.7138172860000003 podStartE2EDuration="5.135722608s" podCreationTimestamp="2025-12-05 07:49:30 +0000 UTC" firstStartedPulling="2025-12-05 07:49:32.072754179 +0000 UTC m=+3806.142270511" lastFinishedPulling="2025-12-05 07:49:34.494659471 +0000 UTC m=+3808.564175833" observedRunningTime="2025-12-05 07:49:35.131256546 +0000 UTC m=+3809.200772918" watchObservedRunningTime="2025-12-05 07:49:35.135722608 +0000 UTC m=+3809.205238940" Dec 05 07:49:41 crc kubenswrapper[4780]: I1205 07:49:41.219633 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:41 crc kubenswrapper[4780]: I1205 07:49:41.220011 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:41 crc 
kubenswrapper[4780]: I1205 07:49:41.496152 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:42 crc kubenswrapper[4780]: I1205 07:49:42.197689 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:42 crc kubenswrapper[4780]: I1205 07:49:42.251918 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:44 crc kubenswrapper[4780]: I1205 07:49:44.164134 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2xgc" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="registry-server" containerID="cri-o://7b2504713143ffa40172337cde994b6698448de302bbb5a14fd59d85be433464" gracePeriod=2 Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.184938 4780 generic.go:334] "Generic (PLEG): container finished" podID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerID="7b2504713143ffa40172337cde994b6698448de302bbb5a14fd59d85be433464" exitCode=0 Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.185020 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerDied","Data":"7b2504713143ffa40172337cde994b6698448de302bbb5a14fd59d85be433464"} Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.479096 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.556604 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7bt\" (UniqueName: \"kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt\") pod \"dc6c7117-39ba-450f-b243-37fbb95e7496\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.556762 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content\") pod \"dc6c7117-39ba-450f-b243-37fbb95e7496\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.557826 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities" (OuterVolumeSpecName: "utilities") pod "dc6c7117-39ba-450f-b243-37fbb95e7496" (UID: "dc6c7117-39ba-450f-b243-37fbb95e7496"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.556793 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities\") pod \"dc6c7117-39ba-450f-b243-37fbb95e7496\" (UID: \"dc6c7117-39ba-450f-b243-37fbb95e7496\") " Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.560147 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.562340 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt" (OuterVolumeSpecName: "kube-api-access-ht7bt") pod "dc6c7117-39ba-450f-b243-37fbb95e7496" (UID: "dc6c7117-39ba-450f-b243-37fbb95e7496"). InnerVolumeSpecName "kube-api-access-ht7bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.661489 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7bt\" (UniqueName: \"kubernetes.io/projected/dc6c7117-39ba-450f-b243-37fbb95e7496-kube-api-access-ht7bt\") on node \"crc\" DevicePath \"\"" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.673365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc6c7117-39ba-450f-b243-37fbb95e7496" (UID: "dc6c7117-39ba-450f-b243-37fbb95e7496"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:49:46 crc kubenswrapper[4780]: I1205 07:49:46.763556 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc6c7117-39ba-450f-b243-37fbb95e7496-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.193572 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2xgc" event={"ID":"dc6c7117-39ba-450f-b243-37fbb95e7496","Type":"ContainerDied","Data":"34496302dbb86a313b2fa63db3ce769514e792aad31e8fb039be9a0380113f1e"} Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.193623 4780 scope.go:117] "RemoveContainer" containerID="7b2504713143ffa40172337cde994b6698448de302bbb5a14fd59d85be433464" Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.193644 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2xgc" Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.213256 4780 scope.go:117] "RemoveContainer" containerID="f14acd0d2919922942770e93671c2b2e8011e2bcc3127c5bd47ebc55bea3a555" Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.232660 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.236627 4780 scope.go:117] "RemoveContainer" containerID="7ab55c42262c5d5fec211d9decb7f6fbda52302af55001f5502036baa2e3b5a5" Dec 05 07:49:47 crc kubenswrapper[4780]: I1205 07:49:47.243174 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2xgc"] Dec 05 07:49:48 crc kubenswrapper[4780]: I1205 07:49:48.153711 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" path="/var/lib/kubelet/pods/dc6c7117-39ba-450f-b243-37fbb95e7496/volumes" Dec 05 07:49:59 crc kubenswrapper[4780]: I1205 07:49:59.908074 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:49:59 crc kubenswrapper[4780]: I1205 07:49:59.908758 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:49:59 crc kubenswrapper[4780]: I1205 07:49:59.908814 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:49:59 crc kubenswrapper[4780]: I1205 07:49:59.909661 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:49:59 crc kubenswrapper[4780]: I1205 07:49:59.909765 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" gracePeriod=600 Dec 05 07:50:00 crc kubenswrapper[4780]: E1205 07:50:00.029525 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:50:00 crc kubenswrapper[4780]: I1205 07:50:00.291410 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" exitCode=0 
Dec 05 07:50:00 crc kubenswrapper[4780]: I1205 07:50:00.291499 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4"} Dec 05 07:50:00 crc kubenswrapper[4780]: I1205 07:50:00.291581 4780 scope.go:117] "RemoveContainer" containerID="50108852f61d5a0d7495b72c6eef39cad56df49bc32160c0df7f74fd3f12987b" Dec 05 07:50:00 crc kubenswrapper[4780]: I1205 07:50:00.292312 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:50:00 crc kubenswrapper[4780]: E1205 07:50:00.292683 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:50:11 crc kubenswrapper[4780]: I1205 07:50:11.139517 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:50:11 crc kubenswrapper[4780]: E1205 07:50:11.140876 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:50:24 crc kubenswrapper[4780]: I1205 07:50:24.138865 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:50:24 crc kubenswrapper[4780]: E1205 07:50:24.139594 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:50:36 crc kubenswrapper[4780]: I1205 07:50:36.142672 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:50:36 crc kubenswrapper[4780]: E1205 07:50:36.143361 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:50:50 crc kubenswrapper[4780]: I1205 07:50:50.139073 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:50:50 crc kubenswrapper[4780]: E1205 07:50:50.140746 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:51:03 crc kubenswrapper[4780]: I1205 07:51:03.138170 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:51:03 crc kubenswrapper[4780]: E1205 07:51:03.138995 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:51:15 crc kubenswrapper[4780]: I1205 07:51:15.139318 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:51:15 crc kubenswrapper[4780]: E1205 07:51:15.140064 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:51:30 crc kubenswrapper[4780]: I1205 07:51:30.139284 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:51:30 crc kubenswrapper[4780]: E1205 07:51:30.140016 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:51:43 crc kubenswrapper[4780]: I1205 07:51:43.139114 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:51:43 crc kubenswrapper[4780]: E1205 07:51:43.140038 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:51:55 crc kubenswrapper[4780]: I1205 07:51:55.139688 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:51:55 crc kubenswrapper[4780]: E1205 07:51:55.140610 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:52:10 crc kubenswrapper[4780]: I1205 07:52:10.138380 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:52:10 crc kubenswrapper[4780]: E1205 07:52:10.139244 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:52:23 crc kubenswrapper[4780]: I1205 07:52:23.139398 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:52:23 crc kubenswrapper[4780]: E1205 07:52:23.140303 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:52:38 crc kubenswrapper[4780]: I1205 07:52:38.139447 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:52:38 crc kubenswrapper[4780]: E1205 07:52:38.140283 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:52:50 crc kubenswrapper[4780]: I1205 07:52:50.139344 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:52:50 crc kubenswrapper[4780]: E1205 07:52:50.140265 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:53:04 crc kubenswrapper[4780]: I1205 07:53:04.138685 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:53:04 crc kubenswrapper[4780]: E1205 07:53:04.139503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.782980 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjkbl"] Dec 05 07:53:12 crc kubenswrapper[4780]: E1205 07:53:12.783899 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="registry-server" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.783911 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="registry-server" Dec 05 07:53:12 crc kubenswrapper[4780]: E1205 07:53:12.783926 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="extract-utilities" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.783932 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="extract-utilities" Dec 05 07:53:12 crc kubenswrapper[4780]: E1205 07:53:12.783945 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="extract-content" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.783951 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="extract-content" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.784121 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6c7117-39ba-450f-b243-37fbb95e7496" containerName="registry-server" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.785391 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.789967 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjkbl"] Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.955335 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.955419 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:12 crc kubenswrapper[4780]: I1205 07:53:12.955671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqpb\" (UniqueName: \"kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.056844 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities\") pod \"certified-operators-vjkbl\" (UID: 
\"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.056935 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.056989 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqpb\" (UniqueName: \"kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.057477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.057587 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.077229 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbqpb\" (UniqueName: \"kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb\") pod \"certified-operators-vjkbl\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.104987 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.564326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjkbl"] Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.778466 4780 generic.go:334] "Generic (PLEG): container finished" podID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerID="7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2" exitCode=0 Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.778508 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerDied","Data":"7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2"} Dec 05 07:53:13 crc kubenswrapper[4780]: I1205 07:53:13.778532 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerStarted","Data":"1643d76698e39cc0a9f26e60f4362e05fa04642a807c5bba0dedca6d5f6bad67"} Dec 05 07:53:14 crc kubenswrapper[4780]: I1205 07:53:14.786676 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerStarted","Data":"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d"} Dec 05 07:53:15 crc kubenswrapper[4780]: I1205 07:53:15.797765 4780 generic.go:334] "Generic (PLEG): container finished" podID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerID="70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d" exitCode=0 Dec 05 07:53:15 crc kubenswrapper[4780]: I1205 07:53:15.797814 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerDied","Data":"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d"} Dec 05 07:53:16 crc kubenswrapper[4780]: I1205 07:53:16.810413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerStarted","Data":"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec"} Dec 05 07:53:16 crc kubenswrapper[4780]: I1205 07:53:16.842128 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjkbl" podStartSLOduration=2.435472083 podStartE2EDuration="4.842104259s" podCreationTimestamp="2025-12-05 07:53:12 +0000 UTC" firstStartedPulling="2025-12-05 07:53:13.779693644 +0000 UTC m=+4027.849209976" lastFinishedPulling="2025-12-05 07:53:16.1863258 +0000 UTC m=+4030.255842152" observedRunningTime="2025-12-05 07:53:16.839750114 +0000 UTC m=+4030.909266476" watchObservedRunningTime="2025-12-05 07:53:16.842104259 +0000 UTC m=+4030.911620601" Dec 05 07:53:19 crc kubenswrapper[4780]: I1205 07:53:19.138542 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:53:19 crc kubenswrapper[4780]: E1205 07:53:19.139167 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 05 07:53:19 crc kubenswrapper[4780]: I1205 07:53:19.138542 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4"
Dec 05 07:53:19 crc kubenswrapper[4780]: E1205 07:53:19.139167 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
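The "back-off 5m0s" in the entry above is the kubelet's crash-loop restart delay at its documented cap: the delay starts at 10s, doubles on each failed restart, and tops out at five minutes, resetting only after the container runs cleanly for 10 minutes. A sketch of that schedule, with values taken from the Kubernetes documentation rather than kubelet source:

```python
# Crash-loop restart backoff: 10s initial delay, doubled per failed
# restart, capped at 300s (the "back-off 5m0s" seen above).
def backoff_delays(restarts, initial=10, cap=300):
    delay = initial
    for attempt in range(1, restarts + 1):
        yield attempt, min(delay, cap)
        delay *= 2

for attempt, delay in backoff_delays(7):
    print(f'restart {attempt}: wait {delay}s')
# restart 1: wait 10s ... restart 6 onward: wait 300s (i.e. 5m0s)
```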
Need to start a new one" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.636116 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.636160 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.636196 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb5xv\" (UniqueName: \"kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.639860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkk7v"] Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.737309 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.737838 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.738002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb5xv\" (UniqueName: \"kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.739005 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.739059 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.756738 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pb5xv\" (UniqueName: \"kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv\") pod \"community-operators-fkk7v\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.797185 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.883080 4780 generic.go:334] "Generic (PLEG): container finished" podID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerID="ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec" exitCode=0 Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.883809 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerDied","Data":"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec"} Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.883927 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjkbl" event={"ID":"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35","Type":"ContainerDied","Data":"1643d76698e39cc0a9f26e60f4362e05fa04642a807c5bba0dedca6d5f6bad67"} Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.884002 4780 scope.go:117] "RemoveContainer" containerID="ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.884177 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjkbl" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.905144 4780 scope.go:117] "RemoveContainer" containerID="70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.934686 4780 scope.go:117] "RemoveContainer" containerID="7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.967190 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.972087 4780 scope.go:117] "RemoveContainer" containerID="ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec" Dec 05 07:53:26 crc kubenswrapper[4780]: E1205 07:53:26.972847 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec\": container with ID starting with ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec not found: ID does not exist" containerID="ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.972972 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec"} err="failed to get container status \"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec\": rpc error: code = NotFound desc = could not find container \"ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec\": container with ID starting with ed03aba16debe3f73c2693c11f83c16015937c147320808303a70f5b09b489ec not found: ID does not exist" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.973064 4780 scope.go:117] "RemoveContainer" containerID="70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d" Dec 05 07:53:26 crc kubenswrapper[4780]: E1205 07:53:26.975754 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d\": container with ID starting with 70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d not found: ID does not exist" containerID="70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.975867 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d"} err="failed to get container status \"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d\": rpc error: code = NotFound desc = could not find container \"70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d\": container with ID starting with 70f932470d7defc99f385fcb5f9a2a492f55b18f98e8d0fa422533e23652e00d not found: ID does not exist" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.975980 4780 scope.go:117] "RemoveContainer" containerID="7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.977466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities\") pod \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.977565 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbqpb\" (UniqueName: \"kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb\") pod \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.977619 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content\") pod \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\" (UID: \"1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35\") " Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.978399 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities" (OuterVolumeSpecName: "utilities") pod "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" (UID: "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.981484 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb" (OuterVolumeSpecName: "kube-api-access-zbqpb") pod "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" (UID: "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35"). InnerVolumeSpecName "kube-api-access-zbqpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:53:26 crc kubenswrapper[4780]: E1205 07:53:26.982760 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2\": container with ID starting with 7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2 not found: ID does not exist" containerID="7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2" Dec 05 07:53:26 crc kubenswrapper[4780]: I1205 07:53:26.982902 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2"} err="failed to get container status \"7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2\": rpc error: code = NotFound desc = could not find container \"7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2\": container with ID starting with 7a9511d4187e5fb0674311c324bdced20c76e5ef28a393dbb4d09ef35db6e3f2 not found: ID does not exist" Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.045445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" (UID: "1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.079660 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.079701 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbqpb\" (UniqueName: \"kubernetes.io/projected/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-kube-api-access-zbqpb\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.079713 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.226056 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjkbl"] Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.236847 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjkbl"] Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.474576 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fkk7v"] Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.892178 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f999cc5-c12b-467b-97c8-3e7fcd830528" containerID="2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8" exitCode=0 Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.892235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerDied","Data":"2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8"} Dec 05 07:53:27 crc kubenswrapper[4780]: I1205 07:53:27.892371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerStarted","Data":"8d4653858915f9fc780247c4a06db04e804308a7b80685c39742a38ce3c16efa"} Dec 05 07:53:28 crc kubenswrapper[4780]: I1205 07:53:28.147816 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" path="/var/lib/kubelet/pods/1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35/volumes" Dec 05 07:53:28 crc kubenswrapper[4780]: I1205 07:53:28.903118 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f999cc5-c12b-467b-97c8-3e7fcd830528" containerID="c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9" exitCode=0 Dec 05 07:53:28 crc kubenswrapper[4780]: I1205 07:53:28.903184 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerDied","Data":"c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9"} Dec 05 07:53:29 crc kubenswrapper[4780]: I1205 07:53:29.912556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerStarted","Data":"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17"} Dec 05 07:53:29 crc kubenswrapper[4780]: I1205 07:53:29.934615 4780 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-fkk7v" podStartSLOduration=2.436699974 podStartE2EDuration="3.934594537s" podCreationTimestamp="2025-12-05 07:53:26 +0000 UTC" firstStartedPulling="2025-12-05 07:53:27.895135884 +0000 UTC m=+4041.964652216" lastFinishedPulling="2025-12-05 07:53:29.393030437 +0000 UTC m=+4043.462546779" observedRunningTime="2025-12-05 07:53:29.931091672 +0000 UTC m=+4044.000608004" watchObservedRunningTime="2025-12-05 07:53:29.934594537 +0000 UTC m=+4044.004110869" Dec 05 07:53:31 crc kubenswrapper[4780]: I1205 07:53:31.139403 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:53:31 crc kubenswrapper[4780]: E1205 07:53:31.139954 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:53:36 crc kubenswrapper[4780]: I1205 07:53:36.967630 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:36 crc kubenswrapper[4780]: I1205 07:53:36.968011 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:37 crc kubenswrapper[4780]: I1205 07:53:37.013478 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:38 crc kubenswrapper[4780]: I1205 07:53:38.034763 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:38 crc kubenswrapper[4780]: I1205 07:53:38.090949 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkk7v"] Dec 05 07:53:39 crc kubenswrapper[4780]: I1205 07:53:39.987300 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fkk7v" podUID="4f999cc5-c12b-467b-97c8-3e7fcd830528" containerName="registry-server" containerID="cri-o://d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17" gracePeriod=2 Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.933540 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.997164 4780 generic.go:334] "Generic (PLEG): container finished" podID="4f999cc5-c12b-467b-97c8-3e7fcd830528" containerID="d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17" exitCode=0 Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.997206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerDied","Data":"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17"} Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.997244 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fkk7v" event={"ID":"4f999cc5-c12b-467b-97c8-3e7fcd830528","Type":"ContainerDied","Data":"8d4653858915f9fc780247c4a06db04e804308a7b80685c39742a38ce3c16efa"} Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.997261 4780 scope.go:117] "RemoveContainer" containerID="d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17" Dec 05 07:53:40 crc kubenswrapper[4780]: I1205 07:53:40.997266 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fkk7v" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.016043 4780 scope.go:117] "RemoveContainer" containerID="c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.031962 4780 scope.go:117] "RemoveContainer" containerID="2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.052136 4780 scope.go:117] "RemoveContainer" containerID="d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17" Dec 05 07:53:41 crc kubenswrapper[4780]: E1205 07:53:41.052821 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17\": container with ID starting with d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17 not found: ID does not exist" containerID="d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.052857 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17"} err="failed to get container status \"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17\": rpc error: code = NotFound desc = could not find container \"d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17\": container with ID starting with d34c0e036ea59af75a6a70a865990d4ccab97d10aa29eda8d5e1366cca719d17 not found: ID does not exist" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.052893 4780 scope.go:117] "RemoveContainer" containerID="c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9" Dec 05 07:53:41 crc kubenswrapper[4780]: E1205 07:53:41.053158 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9\": container with ID starting with c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9 not found: ID does not exist" 
containerID="c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.053208 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9"} err="failed to get container status \"c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9\": rpc error: code = NotFound desc = could not find container \"c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9\": container with ID starting with c12179af40d5daa7e8fb3a5eab4c8fff056aa9b99305ac12db9110dfad212ed9 not found: ID does not exist" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.053246 4780 scope.go:117] "RemoveContainer" containerID="2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8" Dec 05 07:53:41 crc kubenswrapper[4780]: E1205 07:53:41.053561 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8\": container with ID starting with 2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8 not found: ID does not exist" containerID="2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.053594 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8"} err="failed to get container status \"2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8\": rpc error: code = NotFound desc = could not find container \"2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8\": container with ID starting with 2514b95c69c6158a3bcf60a8c29e7801c9b13cb8e0228b20cfda889e669e10d8 not found: ID does not exist" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.084337 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities\") pod \"4f999cc5-c12b-467b-97c8-3e7fcd830528\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.084554 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content\") pod \"4f999cc5-c12b-467b-97c8-3e7fcd830528\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.084722 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb5xv\" (UniqueName: \"kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv\") pod \"4f999cc5-c12b-467b-97c8-3e7fcd830528\" (UID: \"4f999cc5-c12b-467b-97c8-3e7fcd830528\") " Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.085292 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities" (OuterVolumeSpecName: "utilities") pod "4f999cc5-c12b-467b-97c8-3e7fcd830528" (UID: "4f999cc5-c12b-467b-97c8-3e7fcd830528"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.089838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv" (OuterVolumeSpecName: "kube-api-access-pb5xv") pod "4f999cc5-c12b-467b-97c8-3e7fcd830528" (UID: "4f999cc5-c12b-467b-97c8-3e7fcd830528"). InnerVolumeSpecName "kube-api-access-pb5xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.131793 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f999cc5-c12b-467b-97c8-3e7fcd830528" (UID: "4f999cc5-c12b-467b-97c8-3e7fcd830528"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.186340 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb5xv\" (UniqueName: \"kubernetes.io/projected/4f999cc5-c12b-467b-97c8-3e7fcd830528-kube-api-access-pb5xv\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.186373 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.186384 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f999cc5-c12b-467b-97c8-3e7fcd830528-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.331630 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fkk7v"] Dec 05 07:53:41 crc kubenswrapper[4780]: I1205 07:53:41.337538 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fkk7v"] Dec 05 07:53:42 crc kubenswrapper[4780]: I1205 07:53:42.154614 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f999cc5-c12b-467b-97c8-3e7fcd830528" path="/var/lib/kubelet/pods/4f999cc5-c12b-467b-97c8-3e7fcd830528/volumes" Dec 05 07:53:45 crc kubenswrapper[4780]: I1205 07:53:45.139583 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:53:45 crc kubenswrapper[4780]: E1205 07:53:45.140235 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:53:58 crc kubenswrapper[4780]: I1205 07:53:58.138782 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:53:58 crc kubenswrapper[4780]: E1205 07:53:58.139485 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:54:11 crc kubenswrapper[4780]: I1205 07:54:11.139313 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:54:11 crc kubenswrapper[4780]: E1205 07:54:11.140570 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:54:24 crc kubenswrapper[4780]: I1205 07:54:24.139385 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:54:24 crc kubenswrapper[4780]: E1205 07:54:24.140141 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:54:39 crc kubenswrapper[4780]: I1205 07:54:39.138645 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:54:39 crc kubenswrapper[4780]: E1205 07:54:39.139360 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:54:50 crc kubenswrapper[4780]: I1205 07:54:50.138362 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:54:50 crc kubenswrapper[4780]: E1205 07:54:50.139103 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 07:55:03 crc kubenswrapper[4780]: I1205 07:55:03.138772 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:55:03 crc kubenswrapper[4780]: I1205 07:55:03.600589 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605"} Dec 05 07:57:29 crc kubenswrapper[4780]: I1205 07:57:29.908239 4780 patch_prober.go:28] interesting 
pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 07:57:29 crc kubenswrapper[4780]: I1205 07:57:29.908853 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 07:57:59 crc kubenswrapper[4780]: I1205 07:57:59.907951 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 07:57:59 crc kubenswrapper[4780]: I1205 07:57:59.908458 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
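The liveness failures above are plain HTTP GETs against 127.0.0.1:8798/health that die with "connection refused" because nothing is listening while the daemon sits in back-off. A rough equivalent of a single probe attempt; the URL comes from the log, while the helper and its timeout are illustrative assumptions:

```python
import urllib.request
import urllib.error

# One liveness-probe attempt: GET the health endpoint and treat a refused
# connection (or any transport error) as a failure.
def probe(url='http://127.0.0.1:8798/health', timeout=1.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400   # kubelet counts 2xx/3xx as success
    except (urllib.error.URLError, OSError) as exc:
        print(f'Probe failed: {exc}')         # e.g. "connection refused" above
        return False

probe()
```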
containerName="extract-utilities" Dec 05 07:58:08 crc kubenswrapper[4780]: E1205 07:58:08.689609 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerName="registry-server" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.689614 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerName="registry-server" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.689776 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2bf7d4-18c7-4ad0-9330-85f4a9c7af35" containerName="registry-server" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.689791 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f999cc5-c12b-467b-97c8-3e7fcd830528" containerName="registry-server" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.690765 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.706280 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf2qc"] Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.789979 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvtgh\" (UniqueName: \"kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.790063 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.790113 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.891664 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.891785 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvtgh\" (UniqueName: \"kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.891913 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content\") pod \"redhat-marketplace-hf2qc\" (UID: 
\"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.892390 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.892561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:08 crc kubenswrapper[4780]: I1205 07:58:08.913525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvtgh\" (UniqueName: \"kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh\") pod \"redhat-marketplace-hf2qc\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:09 crc kubenswrapper[4780]: I1205 07:58:09.010357 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:09 crc kubenswrapper[4780]: I1205 07:58:09.303529 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf2qc"] Dec 05 07:58:10 crc kubenswrapper[4780]: I1205 07:58:10.004957 4780 generic.go:334] "Generic (PLEG): container finished" podID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerID="d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e" exitCode=0 Dec 05 07:58:10 crc kubenswrapper[4780]: I1205 07:58:10.006844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerDied","Data":"d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e"} Dec 05 07:58:10 crc kubenswrapper[4780]: I1205 07:58:10.006955 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerStarted","Data":"5bc3a15286ab5fd2d1866ae7e220ef37ebca01bde42f5f93e4164254af51f771"} Dec 05 07:58:10 crc kubenswrapper[4780]: I1205 07:58:10.007008 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 07:58:11 crc kubenswrapper[4780]: I1205 07:58:11.012536 4780 generic.go:334] "Generic (PLEG): container finished" podID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerID="c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99" exitCode=0 Dec 05 07:58:11 crc kubenswrapper[4780]: I1205 07:58:11.012602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerDied","Data":"c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99"} Dec 05 07:58:12 crc kubenswrapper[4780]: I1205 07:58:12.020195 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" 
event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerStarted","Data":"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8"} Dec 05 07:58:12 crc kubenswrapper[4780]: I1205 07:58:12.038811 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hf2qc" podStartSLOduration=2.622673307 podStartE2EDuration="4.038794483s" podCreationTimestamp="2025-12-05 07:58:08 +0000 UTC" firstStartedPulling="2025-12-05 07:58:10.006792643 +0000 UTC m=+4324.076308975" lastFinishedPulling="2025-12-05 07:58:11.422913779 +0000 UTC m=+4325.492430151" observedRunningTime="2025-12-05 07:58:12.033793045 +0000 UTC m=+4326.103309377" watchObservedRunningTime="2025-12-05 07:58:12.038794483 +0000 UTC m=+4326.108310805" Dec 05 07:58:19 crc kubenswrapper[4780]: I1205 07:58:19.011146 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:19 crc kubenswrapper[4780]: I1205 07:58:19.011660 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:19 crc kubenswrapper[4780]: I1205 07:58:19.062580 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:19 crc kubenswrapper[4780]: I1205 07:58:19.129267 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:19 crc kubenswrapper[4780]: I1205 07:58:19.295819 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf2qc"] Dec 05 07:58:21 crc kubenswrapper[4780]: I1205 07:58:21.087974 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hf2qc" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="registry-server" containerID="cri-o://f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8" gracePeriod=2 Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.074562 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.101947 4780 generic.go:334] "Generic (PLEG): container finished" podID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerID="f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8" exitCode=0 Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.101986 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerDied","Data":"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8"} Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.102037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf2qc" event={"ID":"c8f0057a-c260-4ad6-8d99-7256707ed0ca","Type":"ContainerDied","Data":"5bc3a15286ab5fd2d1866ae7e220ef37ebca01bde42f5f93e4164254af51f771"} Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.102046 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf2qc" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.102058 4780 scope.go:117] "RemoveContainer" containerID="f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.120582 4780 scope.go:117] "RemoveContainer" containerID="c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.136194 4780 scope.go:117] "RemoveContainer" containerID="d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.157818 4780 scope.go:117] "RemoveContainer" containerID="f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8" Dec 05 07:58:22 crc kubenswrapper[4780]: E1205 07:58:22.160139 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8\": container with ID starting with f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8 not found: ID does not exist" containerID="f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.160179 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8"} err="failed to get container status \"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8\": rpc error: code = NotFound desc = could not find container \"f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8\": container with ID starting with f7081bfa34d57ea2580d29feaa3962a10272c2d05aa6278b0bf696486ba631e8 not found: ID does not exist" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.160205 4780 scope.go:117] "RemoveContainer" containerID="c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99" Dec 05 07:58:22 crc kubenswrapper[4780]: E1205 07:58:22.160819 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99\": container with ID starting with c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99 not found: ID does not exist" containerID="c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.160846 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99"} err="failed to get container status \"c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99\": rpc error: code = NotFound desc = could not find container \"c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99\": container with ID starting with c8edb48f6881b2d4bd00f8294e0143ca80ffb604488dbc8c1bbbce492f2ddc99 not found: ID does not exist" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.160864 4780 scope.go:117] "RemoveContainer" containerID="d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e" Dec 05 07:58:22 crc kubenswrapper[4780]: E1205 07:58:22.161408 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e\": container with ID starting 
with d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e not found: ID does not exist" containerID="d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.161435 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e"} err="failed to get container status \"d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e\": rpc error: code = NotFound desc = could not find container \"d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e\": container with ID starting with d8dca1c3537f4bcbad966e81e618ef7e9eb0aa7532356267bc74b40f2042650e not found: ID does not exist" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.176179 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content\") pod \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.176252 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities\") pod \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.176284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvtgh\" (UniqueName: \"kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh\") pod \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\" (UID: \"c8f0057a-c260-4ad6-8d99-7256707ed0ca\") " Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.177194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities" (OuterVolumeSpecName: "utilities") pod "c8f0057a-c260-4ad6-8d99-7256707ed0ca" (UID: "c8f0057a-c260-4ad6-8d99-7256707ed0ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.181350 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh" (OuterVolumeSpecName: "kube-api-access-hvtgh") pod "c8f0057a-c260-4ad6-8d99-7256707ed0ca" (UID: "c8f0057a-c260-4ad6-8d99-7256707ed0ca"). InnerVolumeSpecName "kube-api-access-hvtgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.195733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8f0057a-c260-4ad6-8d99-7256707ed0ca" (UID: "c8f0057a-c260-4ad6-8d99-7256707ed0ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.278444 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.278479 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8f0057a-c260-4ad6-8d99-7256707ed0ca-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.278492 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvtgh\" (UniqueName: \"kubernetes.io/projected/c8f0057a-c260-4ad6-8d99-7256707ed0ca-kube-api-access-hvtgh\") on node \"crc\" DevicePath \"\"" Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.438691 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf2qc"] Dec 05 07:58:22 crc kubenswrapper[4780]: I1205 07:58:22.443948 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf2qc"] Dec 05 07:58:24 crc kubenswrapper[4780]: I1205 07:58:24.146444 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" path="/var/lib/kubelet/pods/c8f0057a-c260-4ad6-8d99-7256707ed0ca/volumes" Dec 05 07:58:29 crc kubenswrapper[4780]: I1205 07:58:29.907990 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 07:58:29 crc kubenswrapper[4780]: I1205 07:58:29.908991 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 07:58:29 crc kubenswrapper[4780]: I1205 07:58:29.909060 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 07:58:29 crc kubenswrapper[4780]: I1205 07:58:29.910046 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 07:58:29 crc kubenswrapper[4780]: I1205 07:58:29.910113 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605" gracePeriod=600 Dec 05 07:58:30 crc kubenswrapper[4780]: I1205 07:58:30.161114 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605" exitCode=0 Dec 05 07:58:30 crc kubenswrapper[4780]: I1205 
07:58:30.161180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605"} Dec 05 07:58:30 crc kubenswrapper[4780]: I1205 07:58:30.161245 4780 scope.go:117] "RemoveContainer" containerID="c60f3a72d8899e23f71555b36261b39af39679f141214ac5aa293a7f698184d4" Dec 05 07:58:31 crc kubenswrapper[4780]: I1205 07:58:31.171345 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b"} Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.720846 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 07:59:57 crc kubenswrapper[4780]: E1205 07:59:57.721661 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="extract-utilities" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.721676 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="extract-utilities" Dec 05 07:59:57 crc kubenswrapper[4780]: E1205 07:59:57.721697 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="extract-content" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.721705 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="extract-content" Dec 05 07:59:57 crc kubenswrapper[4780]: E1205 07:59:57.721723 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="registry-server" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.721734 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="registry-server" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.721938 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f0057a-c260-4ad6-8d99-7256707ed0ca" containerName="registry-server" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.723249 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.736591 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.847236 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rc75\" (UniqueName: \"kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.847318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.847370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.948864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rc75\" (UniqueName: \"kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.948925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.948955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.949438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.949455 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:57 crc kubenswrapper[4780]: I1205 07:59:57.971688 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8rc75\" (UniqueName: \"kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75\") pod \"redhat-operators-shfpj\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:58 crc kubenswrapper[4780]: I1205 07:59:58.042322 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 07:59:58 crc kubenswrapper[4780]: I1205 07:59:58.523165 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 07:59:58 crc kubenswrapper[4780]: I1205 07:59:58.878507 4780 generic.go:334] "Generic (PLEG): container finished" podID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerID="d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f" exitCode=0 Dec 05 07:59:58 crc kubenswrapper[4780]: I1205 07:59:58.878551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerDied","Data":"d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f"} Dec 05 07:59:58 crc kubenswrapper[4780]: I1205 07:59:58.878577 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerStarted","Data":"0ff4debc8e2c5337cbc62f846e717a6875a86c33f81acb169afea75a8303ae2d"} Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.190714 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls"] Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.192137 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.196098 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.196359 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.202665 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls"] Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.281483 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.281570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q6gq\" (UniqueName: \"kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.281598 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.383134 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.383231 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q6gq\" (UniqueName: \"kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.383256 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.384195 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume\") pod 
\"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.397810 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.399808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q6gq\" (UniqueName: \"kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq\") pod \"collect-profiles-29415360-hrbls\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.512007 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.717542 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls"] Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.895020 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" event={"ID":"69f81a97-0e86-4d9b-adde-f4a0810c763a","Type":"ContainerStarted","Data":"6edef5c8914467c4e032e854c3fad9b8fd7ebb3463143be9156eb21e1e288b60"} Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.895343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" event={"ID":"69f81a97-0e86-4d9b-adde-f4a0810c763a","Type":"ContainerStarted","Data":"594d7bec686b976aef70b6dc57548e7fd261f03849d0defe565a70e06d24e5ab"} Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.897261 4780 generic.go:334] "Generic (PLEG): container finished" podID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerID="3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2" exitCode=0 Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.897302 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerDied","Data":"3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2"} Dec 05 08:00:00 crc kubenswrapper[4780]: I1205 08:00:00.918700 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" podStartSLOduration=0.918676342 podStartE2EDuration="918.676342ms" podCreationTimestamp="2025-12-05 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:00:00.913124359 +0000 UTC m=+4434.982640691" watchObservedRunningTime="2025-12-05 08:00:00.918676342 +0000 UTC m=+4434.988192674" Dec 05 08:00:01 crc kubenswrapper[4780]: I1205 08:00:01.905536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" 
event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerStarted","Data":"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4"} Dec 05 08:00:01 crc kubenswrapper[4780]: I1205 08:00:01.906717 4780 generic.go:334] "Generic (PLEG): container finished" podID="69f81a97-0e86-4d9b-adde-f4a0810c763a" containerID="6edef5c8914467c4e032e854c3fad9b8fd7ebb3463143be9156eb21e1e288b60" exitCode=0 Dec 05 08:00:01 crc kubenswrapper[4780]: I1205 08:00:01.906750 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" event={"ID":"69f81a97-0e86-4d9b-adde-f4a0810c763a","Type":"ContainerDied","Data":"6edef5c8914467c4e032e854c3fad9b8fd7ebb3463143be9156eb21e1e288b60"} Dec 05 08:00:01 crc kubenswrapper[4780]: I1205 08:00:01.922775 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shfpj" podStartSLOduration=2.519944946 podStartE2EDuration="4.922753728s" podCreationTimestamp="2025-12-05 07:59:57 +0000 UTC" firstStartedPulling="2025-12-05 07:59:58.879860624 +0000 UTC m=+4432.949376946" lastFinishedPulling="2025-12-05 08:00:01.282669396 +0000 UTC m=+4435.352185728" observedRunningTime="2025-12-05 08:00:01.917950085 +0000 UTC m=+4435.987466417" watchObservedRunningTime="2025-12-05 08:00:01.922753728 +0000 UTC m=+4435.992270060" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.177636 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.223238 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q6gq\" (UniqueName: \"kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq\") pod \"69f81a97-0e86-4d9b-adde-f4a0810c763a\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.223336 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume\") pod \"69f81a97-0e86-4d9b-adde-f4a0810c763a\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.223435 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume\") pod \"69f81a97-0e86-4d9b-adde-f4a0810c763a\" (UID: \"69f81a97-0e86-4d9b-adde-f4a0810c763a\") " Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.224239 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume" (OuterVolumeSpecName: "config-volume") pod "69f81a97-0e86-4d9b-adde-f4a0810c763a" (UID: "69f81a97-0e86-4d9b-adde-f4a0810c763a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.231093 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69f81a97-0e86-4d9b-adde-f4a0810c763a" (UID: "69f81a97-0e86-4d9b-adde-f4a0810c763a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.231631 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq" (OuterVolumeSpecName: "kube-api-access-4q6gq") pod "69f81a97-0e86-4d9b-adde-f4a0810c763a" (UID: "69f81a97-0e86-4d9b-adde-f4a0810c763a"). InnerVolumeSpecName "kube-api-access-4q6gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.324953 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f81a97-0e86-4d9b-adde-f4a0810c763a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.324994 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f81a97-0e86-4d9b-adde-f4a0810c763a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.325005 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q6gq\" (UniqueName: \"kubernetes.io/projected/69f81a97-0e86-4d9b-adde-f4a0810c763a-kube-api-access-4q6gq\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.923663 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" event={"ID":"69f81a97-0e86-4d9b-adde-f4a0810c763a","Type":"ContainerDied","Data":"594d7bec686b976aef70b6dc57548e7fd261f03849d0defe565a70e06d24e5ab"} Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.923704 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594d7bec686b976aef70b6dc57548e7fd261f03849d0defe565a70e06d24e5ab" Dec 05 08:00:03 crc kubenswrapper[4780]: I1205 08:00:03.923804 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls" Dec 05 08:00:04 crc kubenswrapper[4780]: I1205 08:00:04.244318 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g"] Dec 05 08:00:04 crc kubenswrapper[4780]: I1205 08:00:04.249187 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415315-m4v7g"] Dec 05 08:00:06 crc kubenswrapper[4780]: I1205 08:00:06.150244 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb2c6359-8e13-4d63-a8f8-15a24b9a3141" path="/var/lib/kubelet/pods/eb2c6359-8e13-4d63-a8f8-15a24b9a3141/volumes" Dec 05 08:00:08 crc kubenswrapper[4780]: I1205 08:00:08.042469 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:08 crc kubenswrapper[4780]: I1205 08:00:08.042841 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:08 crc kubenswrapper[4780]: I1205 08:00:08.085452 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:08 crc kubenswrapper[4780]: I1205 08:00:08.993769 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:09 crc kubenswrapper[4780]: I1205 08:00:09.036236 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 08:00:10 crc kubenswrapper[4780]: I1205 08:00:10.968499 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shfpj" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="registry-server" containerID="cri-o://699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4" gracePeriod=2 Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.602495 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.667030 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content\") pod \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.667395 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities\") pod \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.667527 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rc75\" (UniqueName: \"kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75\") pod \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\" (UID: \"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc\") " Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.669752 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities" (OuterVolumeSpecName: "utilities") pod "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" (UID: "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.672967 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75" (OuterVolumeSpecName: "kube-api-access-8rc75") pod "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" (UID: "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc"). InnerVolumeSpecName "kube-api-access-8rc75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.768830 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.769161 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rc75\" (UniqueName: \"kubernetes.io/projected/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-kube-api-access-8rc75\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.777256 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" (UID: "c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.870651 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.983734 4780 generic.go:334] "Generic (PLEG): container finished" podID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerID="699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4" exitCode=0 Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.983774 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerDied","Data":"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4"} Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.983797 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shfpj" event={"ID":"c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc","Type":"ContainerDied","Data":"0ff4debc8e2c5337cbc62f846e717a6875a86c33f81acb169afea75a8303ae2d"} Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.983813 4780 scope.go:117] "RemoveContainer" containerID="699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4" Dec 05 08:00:12 crc kubenswrapper[4780]: I1205 08:00:12.984116 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shfpj" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.005869 4780 scope.go:117] "RemoveContainer" containerID="3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.027455 4780 scope.go:117] "RemoveContainer" containerID="d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.028642 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.034261 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shfpj"] Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.064584 4780 scope.go:117] "RemoveContainer" containerID="699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4" Dec 05 08:00:13 crc kubenswrapper[4780]: E1205 08:00:13.065411 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4\": container with ID starting with 699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4 not found: ID does not exist" containerID="699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.065540 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4"} err="failed to get container status \"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4\": rpc error: code = NotFound desc = could not find container \"699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4\": container with ID starting with 699fb5a5bea75548d09872fe47d48e9332fa29ea8f19b2cf0e761643985215e4 not found: ID does not exist" Dec 05 08:00:13 crc 
kubenswrapper[4780]: I1205 08:00:13.065631 4780 scope.go:117] "RemoveContainer" containerID="3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2" Dec 05 08:00:13 crc kubenswrapper[4780]: E1205 08:00:13.066105 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2\": container with ID starting with 3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2 not found: ID does not exist" containerID="3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.066186 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2"} err="failed to get container status \"3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2\": rpc error: code = NotFound desc = could not find container \"3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2\": container with ID starting with 3e9611e3c512959d098342bed74cbd3df4258b3aee05ac3378dba9fd1dbf81f2 not found: ID does not exist" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.066262 4780 scope.go:117] "RemoveContainer" containerID="d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f" Dec 05 08:00:13 crc kubenswrapper[4780]: E1205 08:00:13.066830 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f\": container with ID starting with d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f not found: ID does not exist" containerID="d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f" Dec 05 08:00:13 crc kubenswrapper[4780]: I1205 08:00:13.066959 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f"} err="failed to get container status \"d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f\": rpc error: code = NotFound desc = could not find container \"d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f\": container with ID starting with d91856632a075aeb462b31201beeed9b92d879d8b05c7be05b2ceda143c28b1f not found: ID does not exist" Dec 05 08:00:14 crc kubenswrapper[4780]: I1205 08:00:14.146851 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" path="/var/lib/kubelet/pods/c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc/volumes" Dec 05 08:00:24 crc kubenswrapper[4780]: I1205 08:00:24.554859 4780 scope.go:117] "RemoveContainer" containerID="e7ebb91c55625c71b7cb1095927aed82ac910dc8efd5a7c07eaaa9faaf373d72" Dec 05 08:00:59 crc kubenswrapper[4780]: I1205 08:00:59.908259 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:00:59 crc kubenswrapper[4780]: I1205 08:00:59.908963 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:01:29 crc kubenswrapper[4780]: I1205 08:01:29.908241 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:01:29 crc kubenswrapper[4780]: I1205 08:01:29.908951 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:01:59 crc kubenswrapper[4780]: I1205 08:01:59.907481 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:01:59 crc kubenswrapper[4780]: I1205 08:01:59.908101 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:01:59 crc kubenswrapper[4780]: I1205 08:01:59.908154 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:01:59 crc kubenswrapper[4780]: I1205 08:01:59.908789 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:01:59 crc kubenswrapper[4780]: I1205 08:01:59.908852 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" gracePeriod=600 Dec 05 08:02:00 crc kubenswrapper[4780]: E1205 08:02:00.367519 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:02:00 crc kubenswrapper[4780]: I1205 08:02:00.786992 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" exitCode=0 Dec 05 08:02:00 crc kubenswrapper[4780]: I1205 08:02:00.787035 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b"} Dec 05 08:02:00 crc kubenswrapper[4780]: I1205 08:02:00.787067 4780 scope.go:117] "RemoveContainer" containerID="4546bcf2a40d4ac48fed7e0c449ae4e25baf3884c711ab6fdaca0ce61e695605" Dec 05 08:02:00 crc kubenswrapper[4780]: I1205 08:02:00.787784 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:02:00 crc kubenswrapper[4780]: E1205 08:02:00.788238 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:02:12 crc kubenswrapper[4780]: I1205 08:02:12.138767 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:02:12 crc kubenswrapper[4780]: E1205 08:02:12.139578 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:02:24 crc kubenswrapper[4780]: I1205 08:02:24.881690 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-krvz9"] Dec 05 08:02:24 crc kubenswrapper[4780]: I1205 08:02:24.890996 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-krvz9"] Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.023033 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-j9v7g"] Dec 05 08:02:25 crc kubenswrapper[4780]: E1205 08:02:25.023622 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f81a97-0e86-4d9b-adde-f4a0810c763a" containerName="collect-profiles" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.023731 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f81a97-0e86-4d9b-adde-f4a0810c763a" containerName="collect-profiles" Dec 05 08:02:25 crc kubenswrapper[4780]: E1205 08:02:25.023821 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="registry-server" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.023949 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="registry-server" Dec 05 08:02:25 crc kubenswrapper[4780]: E1205 08:02:25.024059 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="extract-utilities" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.024143 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="extract-utilities" Dec 05 08:02:25 crc kubenswrapper[4780]: E1205 08:02:25.024239 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="extract-content" Dec 05 
08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.024319 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="extract-content" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.024561 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f81a97-0e86-4d9b-adde-f4a0810c763a" containerName="collect-profiles" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.024666 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93f4db0-e2d8-48ea-9bca-f2bd7e26d4bc" containerName="registry-server" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.025348 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.027778 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.027870 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.028044 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.028610 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5555f" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.034633 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j9v7g"] Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.068007 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvc58\" (UniqueName: \"kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.068196 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.068284 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.170748 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.171010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc 
kubenswrapper[4780]: I1205 08:02:25.171146 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvc58\" (UniqueName: \"kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.171214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.172152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.197066 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvc58\" (UniqueName: \"kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58\") pod \"crc-storage-crc-j9v7g\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.361279 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.797304 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-j9v7g"] Dec 05 08:02:25 crc kubenswrapper[4780]: I1205 08:02:25.971622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9v7g" event={"ID":"aa47c212-6b1c-49ce-a909-6083e9280528","Type":"ContainerStarted","Data":"32ef3acc859c588572127bec11dc52d2db138c9aadc9c8123926d0d92337508f"} Dec 05 08:02:26 crc kubenswrapper[4780]: I1205 08:02:26.148911 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e573324-e383-4147-a05d-57261c0d5645" path="/var/lib/kubelet/pods/8e573324-e383-4147-a05d-57261c0d5645/volumes" Dec 05 08:02:26 crc kubenswrapper[4780]: I1205 08:02:26.981126 4780 generic.go:334] "Generic (PLEG): container finished" podID="aa47c212-6b1c-49ce-a909-6083e9280528" containerID="76868575cea8b1fccbea16fb4257c4be118d069f51364f9ac3fac9a71b7b6241" exitCode=0 Dec 05 08:02:26 crc kubenswrapper[4780]: I1205 08:02:26.981203 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9v7g" event={"ID":"aa47c212-6b1c-49ce-a909-6083e9280528","Type":"ContainerDied","Data":"76868575cea8b1fccbea16fb4257c4be118d069f51364f9ac3fac9a71b7b6241"} Dec 05 08:02:27 crc kubenswrapper[4780]: I1205 08:02:27.138296 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:02:27 crc kubenswrapper[4780]: E1205 08:02:27.138660 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.263649 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.315996 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvc58\" (UniqueName: \"kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58\") pod \"aa47c212-6b1c-49ce-a909-6083e9280528\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.316107 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt\") pod \"aa47c212-6b1c-49ce-a909-6083e9280528\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.316218 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage\") pod \"aa47c212-6b1c-49ce-a909-6083e9280528\" (UID: \"aa47c212-6b1c-49ce-a909-6083e9280528\") " Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.317554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "aa47c212-6b1c-49ce-a909-6083e9280528" (UID: "aa47c212-6b1c-49ce-a909-6083e9280528"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.317933 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/aa47c212-6b1c-49ce-a909-6083e9280528-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.322587 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58" (OuterVolumeSpecName: "kube-api-access-hvc58") pod "aa47c212-6b1c-49ce-a909-6083e9280528" (UID: "aa47c212-6b1c-49ce-a909-6083e9280528"). InnerVolumeSpecName "kube-api-access-hvc58". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.334142 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "aa47c212-6b1c-49ce-a909-6083e9280528" (UID: "aa47c212-6b1c-49ce-a909-6083e9280528"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.418930 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/aa47c212-6b1c-49ce-a909-6083e9280528-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.418961 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvc58\" (UniqueName: \"kubernetes.io/projected/aa47c212-6b1c-49ce-a909-6083e9280528-kube-api-access-hvc58\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.995762 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-j9v7g" event={"ID":"aa47c212-6b1c-49ce-a909-6083e9280528","Type":"ContainerDied","Data":"32ef3acc859c588572127bec11dc52d2db138c9aadc9c8123926d0d92337508f"} Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.996123 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ef3acc859c588572127bec11dc52d2db138c9aadc9c8123926d0d92337508f" Dec 05 08:02:28 crc kubenswrapper[4780]: I1205 08:02:28.996185 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-j9v7g" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.644530 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-j9v7g"] Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.651936 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-j9v7g"] Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.772020 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-dtxzn"] Dec 05 08:02:30 crc kubenswrapper[4780]: E1205 08:02:30.772364 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa47c212-6b1c-49ce-a909-6083e9280528" containerName="storage" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.772385 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa47c212-6b1c-49ce-a909-6083e9280528" containerName="storage" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.772623 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa47c212-6b1c-49ce-a909-6083e9280528" containerName="storage" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.773362 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.777082 4780 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5555f" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.777617 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.777791 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.777909 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.780305 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dtxzn"] Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.848285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.848368 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm6mn\" (UniqueName: \"kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.848409 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.949826 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.950485 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm6mn\" (UniqueName: \"kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.950532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.950652 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " 
pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.950821 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:30 crc kubenswrapper[4780]: I1205 08:02:30.974333 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm6mn\" (UniqueName: \"kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn\") pod \"crc-storage-crc-dtxzn\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:31 crc kubenswrapper[4780]: I1205 08:02:31.098474 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:31 crc kubenswrapper[4780]: I1205 08:02:31.538887 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-dtxzn"] Dec 05 08:02:31 crc kubenswrapper[4780]: W1205 08:02:31.545742 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ea3215b_d313_4567_a8a8_178fe49dde00.slice/crio-6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8 WatchSource:0}: Error finding container 6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8: Status 404 returned error can't find the container with id 6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8 Dec 05 08:02:32 crc kubenswrapper[4780]: I1205 08:02:32.016243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dtxzn" event={"ID":"3ea3215b-d313-4567-a8a8-178fe49dde00","Type":"ContainerStarted","Data":"6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8"} Dec 05 08:02:32 crc kubenswrapper[4780]: I1205 08:02:32.151111 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa47c212-6b1c-49ce-a909-6083e9280528" path="/var/lib/kubelet/pods/aa47c212-6b1c-49ce-a909-6083e9280528/volumes" Dec 05 08:02:33 crc kubenswrapper[4780]: I1205 08:02:33.030122 4780 generic.go:334] "Generic (PLEG): container finished" podID="3ea3215b-d313-4567-a8a8-178fe49dde00" containerID="d039e867644b8dc941ab5e862f6b6b9ccc542ec5386d0adccb8cff2d3bc31b1c" exitCode=0 Dec 05 08:02:33 crc kubenswrapper[4780]: I1205 08:02:33.030165 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dtxzn" event={"ID":"3ea3215b-d313-4567-a8a8-178fe49dde00","Type":"ContainerDied","Data":"d039e867644b8dc941ab5e862f6b6b9ccc542ec5386d0adccb8cff2d3bc31b1c"} Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.269365 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.416602 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage\") pod \"3ea3215b-d313-4567-a8a8-178fe49dde00\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.416657 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm6mn\" (UniqueName: \"kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn\") pod \"3ea3215b-d313-4567-a8a8-178fe49dde00\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.416703 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt\") pod \"3ea3215b-d313-4567-a8a8-178fe49dde00\" (UID: \"3ea3215b-d313-4567-a8a8-178fe49dde00\") " Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.417048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "3ea3215b-d313-4567-a8a8-178fe49dde00" (UID: "3ea3215b-d313-4567-a8a8-178fe49dde00"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.421162 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn" (OuterVolumeSpecName: "kube-api-access-sm6mn") pod "3ea3215b-d313-4567-a8a8-178fe49dde00" (UID: "3ea3215b-d313-4567-a8a8-178fe49dde00"). InnerVolumeSpecName "kube-api-access-sm6mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.438018 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "3ea3215b-d313-4567-a8a8-178fe49dde00" (UID: "3ea3215b-d313-4567-a8a8-178fe49dde00"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.518840 4780 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/3ea3215b-d313-4567-a8a8-178fe49dde00-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.518885 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm6mn\" (UniqueName: \"kubernetes.io/projected/3ea3215b-d313-4567-a8a8-178fe49dde00-kube-api-access-sm6mn\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:34 crc kubenswrapper[4780]: I1205 08:02:34.518920 4780 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/3ea3215b-d313-4567-a8a8-178fe49dde00-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 05 08:02:35 crc kubenswrapper[4780]: I1205 08:02:35.042970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-dtxzn" event={"ID":"3ea3215b-d313-4567-a8a8-178fe49dde00","Type":"ContainerDied","Data":"6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8"} Dec 05 08:02:35 crc kubenswrapper[4780]: I1205 08:02:35.043018 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-dtxzn" Dec 05 08:02:35 crc kubenswrapper[4780]: I1205 08:02:35.043019 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7f2cc7b757658f08241b9a1094f00740ae0d954e9c5ec71b6d1f82831715b8" Dec 05 08:02:40 crc kubenswrapper[4780]: I1205 08:02:40.138704 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:02:40 crc kubenswrapper[4780]: E1205 08:02:40.139525 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:02:53 crc kubenswrapper[4780]: I1205 08:02:53.139471 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:02:53 crc kubenswrapper[4780]: E1205 08:02:53.140438 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:03:05 crc kubenswrapper[4780]: I1205 08:03:05.139638 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:03:05 crc kubenswrapper[4780]: E1205 08:03:05.141311 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:03:16 crc kubenswrapper[4780]: I1205 08:03:16.147220 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:03:16 crc kubenswrapper[4780]: E1205 08:03:16.148540 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.339027 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:18 crc kubenswrapper[4780]: E1205 08:03:18.341003 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea3215b-d313-4567-a8a8-178fe49dde00" containerName="storage" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.341021 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea3215b-d313-4567-a8a8-178fe49dde00" containerName="storage" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.341226 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea3215b-d313-4567-a8a8-178fe49dde00" containerName="storage" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.342296 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.351756 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.429173 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.429520 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqz8p\" (UniqueName: \"kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.429807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.531443 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqz8p\" (UniqueName: \"kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.531545 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.531588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.532621 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.532940 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.562490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqz8p\" (UniqueName: \"kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p\") pod \"certified-operators-nzrjp\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:18 crc kubenswrapper[4780]: I1205 08:03:18.679942 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:19 crc kubenswrapper[4780]: I1205 08:03:19.159281 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:19 crc kubenswrapper[4780]: I1205 08:03:19.402570 4780 generic.go:334] "Generic (PLEG): container finished" podID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerID="340ca0c0610c0dfa5b0507faa04981bea2e69390d36a9735673f022782b5c3bd" exitCode=0 Dec 05 08:03:19 crc kubenswrapper[4780]: I1205 08:03:19.402657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerDied","Data":"340ca0c0610c0dfa5b0507faa04981bea2e69390d36a9735673f022782b5c3bd"} Dec 05 08:03:19 crc kubenswrapper[4780]: I1205 08:03:19.403273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerStarted","Data":"f45cb373e3a15a4b41a1ca26173ea3631b8ca3c5c98e4a1f60ea9db37f0685d1"} Dec 05 08:03:19 crc kubenswrapper[4780]: I1205 08:03:19.404771 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:03:21 crc kubenswrapper[4780]: I1205 08:03:21.420029 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerStarted","Data":"d2562aa63501fefa02b5d51840f8eea4c5859b164f4fc7631cccf06b61e937df"} Dec 05 08:03:22 crc kubenswrapper[4780]: I1205 08:03:22.431261 4780 generic.go:334] "Generic (PLEG): container finished" podID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerID="d2562aa63501fefa02b5d51840f8eea4c5859b164f4fc7631cccf06b61e937df" exitCode=0 Dec 05 08:03:22 crc kubenswrapper[4780]: I1205 08:03:22.431323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerDied","Data":"d2562aa63501fefa02b5d51840f8eea4c5859b164f4fc7631cccf06b61e937df"} Dec 05 08:03:23 crc kubenswrapper[4780]: I1205 08:03:23.440210 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerStarted","Data":"94c3751b2cd0ecdd165c7d264d401ab81e62a0cd6997537dbe9b97e5b139fae1"} Dec 05 08:03:23 crc kubenswrapper[4780]: I1205 08:03:23.461664 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nzrjp" podStartSLOduration=1.705837363 podStartE2EDuration="5.461646828s" podCreationTimestamp="2025-12-05 08:03:18 +0000 UTC" firstStartedPulling="2025-12-05 08:03:19.40454356 +0000 UTC m=+4633.474059882" lastFinishedPulling="2025-12-05 08:03:23.160353005 +0000 UTC m=+4637.229869347" observedRunningTime="2025-12-05 08:03:23.456795665 +0000 UTC m=+4637.526311997" watchObservedRunningTime="2025-12-05 08:03:23.461646828 +0000 UTC m=+4637.531163160" Dec 05 08:03:24 crc kubenswrapper[4780]: I1205 08:03:24.634410 4780 scope.go:117] "RemoveContainer" containerID="7b19da8777ffdc66d66308af0b7e15972540dc8726fdfe9e7aa30fe9b8d504fd" Dec 05 08:03:28 crc kubenswrapper[4780]: I1205 08:03:28.680387 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 
08:03:28 crc kubenswrapper[4780]: I1205 08:03:28.680678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:28 crc kubenswrapper[4780]: I1205 08:03:28.738622 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:29 crc kubenswrapper[4780]: I1205 08:03:29.531122 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:29 crc kubenswrapper[4780]: I1205 08:03:29.586155 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:31 crc kubenswrapper[4780]: I1205 08:03:31.139035 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:03:31 crc kubenswrapper[4780]: E1205 08:03:31.139485 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:03:31 crc kubenswrapper[4780]: I1205 08:03:31.503788 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nzrjp" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="registry-server" containerID="cri-o://94c3751b2cd0ecdd165c7d264d401ab81e62a0cd6997537dbe9b97e5b139fae1" gracePeriod=2 Dec 05 08:03:32 crc kubenswrapper[4780]: I1205 08:03:32.511485 4780 generic.go:334] "Generic (PLEG): container finished" podID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerID="94c3751b2cd0ecdd165c7d264d401ab81e62a0cd6997537dbe9b97e5b139fae1" exitCode=0 Dec 05 08:03:32 crc kubenswrapper[4780]: I1205 08:03:32.511538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerDied","Data":"94c3751b2cd0ecdd165c7d264d401ab81e62a0cd6997537dbe9b97e5b139fae1"} Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.160659 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.236431 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities\") pod \"19ce7763-04e5-449a-85c1-a1e43e675c29\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.236513 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content\") pod \"19ce7763-04e5-449a-85c1-a1e43e675c29\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.236655 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqz8p\" (UniqueName: \"kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p\") pod \"19ce7763-04e5-449a-85c1-a1e43e675c29\" (UID: \"19ce7763-04e5-449a-85c1-a1e43e675c29\") " Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.237653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities" (OuterVolumeSpecName: "utilities") pod "19ce7763-04e5-449a-85c1-a1e43e675c29" (UID: "19ce7763-04e5-449a-85c1-a1e43e675c29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.241802 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p" (OuterVolumeSpecName: "kube-api-access-kqz8p") pod "19ce7763-04e5-449a-85c1-a1e43e675c29" (UID: "19ce7763-04e5-449a-85c1-a1e43e675c29"). InnerVolumeSpecName "kube-api-access-kqz8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.285657 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19ce7763-04e5-449a-85c1-a1e43e675c29" (UID: "19ce7763-04e5-449a-85c1-a1e43e675c29"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.338085 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqz8p\" (UniqueName: \"kubernetes.io/projected/19ce7763-04e5-449a-85c1-a1e43e675c29-kube-api-access-kqz8p\") on node \"crc\" DevicePath \"\"" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.338117 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.338128 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19ce7763-04e5-449a-85c1-a1e43e675c29-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.519927 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzrjp" event={"ID":"19ce7763-04e5-449a-85c1-a1e43e675c29","Type":"ContainerDied","Data":"f45cb373e3a15a4b41a1ca26173ea3631b8ca3c5c98e4a1f60ea9db37f0685d1"} Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.520005 4780 scope.go:117] "RemoveContainer" containerID="94c3751b2cd0ecdd165c7d264d401ab81e62a0cd6997537dbe9b97e5b139fae1" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.520075 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzrjp" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.548912 4780 scope.go:117] "RemoveContainer" containerID="d2562aa63501fefa02b5d51840f8eea4c5859b164f4fc7631cccf06b61e937df" Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.571716 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.577802 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nzrjp"] Dec 05 08:03:33 crc kubenswrapper[4780]: I1205 08:03:33.583568 4780 scope.go:117] "RemoveContainer" containerID="340ca0c0610c0dfa5b0507faa04981bea2e69390d36a9735673f022782b5c3bd" Dec 05 08:03:34 crc kubenswrapper[4780]: I1205 08:03:34.498927 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" path="/var/lib/kubelet/pods/19ce7763-04e5-449a-85c1-a1e43e675c29/volumes" Dec 05 08:03:46 crc kubenswrapper[4780]: I1205 08:03:46.147850 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:03:46 crc kubenswrapper[4780]: E1205 08:03:46.148652 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:00 crc kubenswrapper[4780]: I1205 08:04:00.140140 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:04:00 crc kubenswrapper[4780]: E1205 08:04:00.142375 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:14 crc kubenswrapper[4780]: I1205 08:04:14.138383 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:04:14 crc kubenswrapper[4780]: E1205 08:04:14.139169 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:28 crc kubenswrapper[4780]: I1205 08:04:28.138344 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:04:28 crc kubenswrapper[4780]: E1205 08:04:28.139370 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.497768 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:04:38 crc kubenswrapper[4780]: E1205 08:04:38.498747 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="extract-utilities" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.498764 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="extract-utilities" Dec 05 08:04:38 crc kubenswrapper[4780]: E1205 08:04:38.498793 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="registry-server" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.498801 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="registry-server" Dec 05 08:04:38 crc kubenswrapper[4780]: E1205 08:04:38.498826 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="extract-content" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.498837 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="extract-content" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.499054 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ce7763-04e5-449a-85c1-a1e43e675c29" containerName="registry-server" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.499966 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.503631 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwjmc\" (UniqueName: \"kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.503678 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.503737 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.506648 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.507015 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.521411 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.521477 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.524968 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5kc5l" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.532533 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.534067 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.551771 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.565799 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.606203 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9tt\" (UniqueName: \"kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.606268 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.606356 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.606390 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwjmc\" (UniqueName: \"kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.606412 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.607300 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.607947 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.636818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwjmc\" (UniqueName: \"kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc\") pod \"dnsmasq-dns-f568b98b7-pxzjb\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.707149 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.707232 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9tt\" (UniqueName: \"kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.708012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.726471 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9tt\" (UniqueName: \"kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt\") pod \"dnsmasq-dns-59864b8655-pq4q6\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.821358 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:04:38 crc kubenswrapper[4780]: I1205 08:04:38.849620 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.108265 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.121554 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.122752 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.134620 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.222057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.222115 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcvd\" (UniqueName: \"kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.222148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.272617 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.325868 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.325950 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkcvd\" (UniqueName: \"kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.325976 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.326969 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.327173 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.353341 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkcvd\" (UniqueName: \"kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd\") pod \"dnsmasq-dns-7859d78785-xzbhx\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.380809 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.416014 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.417180 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.427576 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.427622 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtjc\" (UniqueName: \"kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.427711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.443245 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.458522 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.464588 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.528750 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.528801 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.528832 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtjc\" (UniqueName: \"kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.529890 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.530216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.554303 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtjc\" (UniqueName: \"kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc\") pod \"dnsmasq-dns-84496478f-9kwnv\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.731601 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:04:39 crc kubenswrapper[4780]: I1205 08:04:39.926497 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.013964 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" event={"ID":"603f710c-c15e-4675-86aa-bf9771447da6","Type":"ContainerStarted","Data":"57500d156fbfac9c89660659f5c64fca39768c48cdb5174dca9de355adf20f80"} Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.015456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" event={"ID":"cf03be4c-26ac-49ed-b103-53c0d2f15eb5","Type":"ContainerStarted","Data":"94cbe0de4df42df3e382a6433bab7c72400bfe5679ce9dbab8d943ad9d81f1fb"} Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.017437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" event={"ID":"3345bdcb-2db9-4b87-bda3-0263c740312b","Type":"ContainerStarted","Data":"11f54efd79d581d45e24089d40168b43270bbfaff1fac1090fafda810b3c0772"} Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.254899 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.258860 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.266380 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.266485 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.266600 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-glftw" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.266719 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.266854 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.267043 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.267141 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.290345 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.304937 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.448740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.448814 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.448842 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.448913 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.448954 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449135 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449193 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449266 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 
08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.449337 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrqc\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkrqc\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550654 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550735 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550753 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550808 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550830 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550894 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.550923 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.551731 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.551760 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.552411 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.553255 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.557102 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.557645 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.558148 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.558790 4780 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.561191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.563082 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.564071 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.564943 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.564983 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/09c34a5aa647612aadfbe1efa3ae06313191248567b753dc88465c3bc16a440e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.567691 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.567968 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.568177 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.569361 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.570303 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.571021 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-27xcl" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.571155 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.584780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.587336 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkrqc\" (UniqueName: 
\"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.606356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg694\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755376 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755434 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755485 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755505 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755526 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755560 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755584 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.755767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.863803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.863858 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.863909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.863934 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.863979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864039 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg694\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864137 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864766 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864847 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.864921 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.866230 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.866409 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.875209 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.884360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.887312 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.887334 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/429c0c2d255be0f0018e965f2e9ddbb3fa5ebf67a906fb1e9d33c6536d2e5029/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.888701 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.899741 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg694\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.900239 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:40 crc kubenswrapper[4780]: I1205 08:04:40.912919 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.062575 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " pod="openstack/rabbitmq-server-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.112320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84496478f-9kwnv" event={"ID":"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b","Type":"ContainerStarted","Data":"8b225b132f4efbad522b41ccd44067fbdbe481c5af333ebef2ba89d0cb799cd7"} Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.139275 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:04:41 crc kubenswrapper[4780]: E1205 08:04:41.139682 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.188426 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.360343 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.362225 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.364735 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.368057 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.368314 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mdgj6" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.368533 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.378159 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.388452 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.479773 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhnw\" (UniqueName: \"kubernetes.io/projected/ff360368-93b6-4ab0-b1d6-e53682ec9336-kube-api-access-xjhnw\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.479821 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-178402ff-b611-4579-a267-ff378a105b72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-178402ff-b611-4579-a267-ff378a105b72\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480013 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-default\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480193 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480285 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.480378 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-kolla-config\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590541 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-kolla-config\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590623 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjhnw\" (UniqueName: \"kubernetes.io/projected/ff360368-93b6-4ab0-b1d6-e53682ec9336-kube-api-access-xjhnw\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-178402ff-b611-4579-a267-ff378a105b72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-178402ff-b611-4579-a267-ff378a105b72\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590795 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590965 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-default\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.590988 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.591032 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.591385 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-kolla-config\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.591921 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.592117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.592167 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ff360368-93b6-4ab0-b1d6-e53682ec9336-config-data-default\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.596146 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.596187 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-178402ff-b611-4579-a267-ff378a105b72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-178402ff-b611-4579-a267-ff378a105b72\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/843ff36c1ca8625b1d1d083fb3d115d16f5e4ab88be92be2eb4801e202dbfbd7/globalmount\"" pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.597946 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.599850 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff360368-93b6-4ab0-b1d6-e53682ec9336-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.613891 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjhnw\" (UniqueName: \"kubernetes.io/projected/ff360368-93b6-4ab0-b1d6-e53682ec9336-kube-api-access-xjhnw\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.637380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-178402ff-b611-4579-a267-ff378a105b72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-178402ff-b611-4579-a267-ff378a105b72\") pod \"openstack-galera-0\" (UID: \"ff360368-93b6-4ab0-b1d6-e53682ec9336\") " pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.701162 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.717900 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:04:41 crc kubenswrapper[4780]: I1205 08:04:41.736772 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:04:41 crc kubenswrapper[4780]: W1205 08:04:41.757720 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5eef2c_eb08_4610_8639_44aaa305bb28.slice/crio-be12363ba27d1dcba5b72ca75dc1007aa72ac51f19d5270d57cc9e3637d4f4ca WatchSource:0}: Error finding container be12363ba27d1dcba5b72ca75dc1007aa72ac51f19d5270d57cc9e3637d4f4ca: Status 404 returned error can't find the container with id be12363ba27d1dcba5b72ca75dc1007aa72ac51f19d5270d57cc9e3637d4f4ca Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.130170 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerStarted","Data":"2755e0d24949901fa33a057a72e59b4b64067608fe34f6fec65875dfed57bc46"} Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.163727 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerStarted","Data":"be12363ba27d1dcba5b72ca75dc1007aa72ac51f19d5270d57cc9e3637d4f4ca"} Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.341752 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.912959 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.914915 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.917436 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8574b" Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.917760 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.917799 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.917954 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 05 08:04:42 crc kubenswrapper[4780]: I1205 08:04:42.921991 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024501 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024562 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/39137b68-813b-4d2a-b543-730c76488431-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024600 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024631 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024648 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024666 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-blqhn\" (UniqueName: \"kubernetes.io/projected/39137b68-813b-4d2a-b543-730c76488431-kube-api-access-blqhn\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.024707 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.126921 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.127013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/39137b68-813b-4d2a-b543-730c76488431-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.127058 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.127106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.127129 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.127175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.128786 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.128907 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqhn\" 
(UniqueName: \"kubernetes.io/projected/39137b68-813b-4d2a-b543-730c76488431-kube-api-access-blqhn\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.128958 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.129053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.129506 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39137b68-813b-4d2a-b543-730c76488431-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.129905 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/39137b68-813b-4d2a-b543-730c76488431-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.131161 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.131203 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/10a1cce30781c5c8cb1bde34990458ad355bb3e1419af69326a8f0a8acb0c135/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.137590 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.138909 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39137b68-813b-4d2a-b543-730c76488431-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.148470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqhn\" (UniqueName: \"kubernetes.io/projected/39137b68-813b-4d2a-b543-730c76488431-kube-api-access-blqhn\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.164443 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ff360368-93b6-4ab0-b1d6-e53682ec9336","Type":"ContainerStarted","Data":"1a626fcb9bd2840881499ab7888138ca58b5788c6d3b52b8684b77ef9b34c018"} Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.184622 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06858c3b-2553-40d6-bd24-e7fdf969d4fd\") pod \"openstack-cell1-galera-0\" (UID: \"39137b68-813b-4d2a-b543-730c76488431\") " pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.246585 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.455894 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.457622 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.461845 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-47ngb" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.462057 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.464740 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.478690 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.642384 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.642531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-kolla-config\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.642558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-config-data\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.642617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.642667 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmph\" (UniqueName: \"kubernetes.io/projected/28c190e9-56f3-48c7-a072-4052688197f5-kube-api-access-pmmph\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.746172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmph\" (UniqueName: \"kubernetes.io/projected/28c190e9-56f3-48c7-a072-4052688197f5-kube-api-access-pmmph\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.746356 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.746391 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-kolla-config\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.746438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-config-data\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.746478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.747601 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-kolla-config\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.747943 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28c190e9-56f3-48c7-a072-4052688197f5-config-data\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.766603 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.775169 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28c190e9-56f3-48c7-a072-4052688197f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.788441 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmph\" (UniqueName: \"kubernetes.io/projected/28c190e9-56f3-48c7-a072-4052688197f5-kube-api-access-pmmph\") pod \"memcached-0\" (UID: \"28c190e9-56f3-48c7-a072-4052688197f5\") " pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.790472 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 05 08:04:43 crc kubenswrapper[4780]: I1205 08:04:43.883675 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 08:04:43 crc kubenswrapper[4780]: W1205 08:04:43.925421 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39137b68_813b_4d2a_b543_730c76488431.slice/crio-d95b25d45a54440ffc8f91b5c20b0225a40c2626e30b9d94e6aa7c9f8c181ab1 WatchSource:0}: Error finding container d95b25d45a54440ffc8f91b5c20b0225a40c2626e30b9d94e6aa7c9f8c181ab1: Status 404 returned error can't find the container with id d95b25d45a54440ffc8f91b5c20b0225a40c2626e30b9d94e6aa7c9f8c181ab1 Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.173938 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"39137b68-813b-4d2a-b543-730c76488431","Type":"ContainerStarted","Data":"d95b25d45a54440ffc8f91b5c20b0225a40c2626e30b9d94e6aa7c9f8c181ab1"} Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.275539 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 08:04:44 crc kubenswrapper[4780]: W1205 08:04:44.284211 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28c190e9_56f3_48c7_a072_4052688197f5.slice/crio-f341c200facabca1a6c64a95e90d1e8d82e7e763163748fb35c3ac7dcc89d1aa WatchSource:0}: Error finding container f341c200facabca1a6c64a95e90d1e8d82e7e763163748fb35c3ac7dcc89d1aa: Status 404 returned error can't find the container with id f341c200facabca1a6c64a95e90d1e8d82e7e763163748fb35c3ac7dcc89d1aa Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.614392 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.616950 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.622540 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.779978 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsq9\" (UniqueName: \"kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.780296 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.780580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.882545 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsq9\" (UniqueName: \"kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.882588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.882667 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.883126 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.883343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.927834 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vbsq9\" (UniqueName: \"kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9\") pod \"community-operators-pnqtd\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:44 crc kubenswrapper[4780]: I1205 08:04:44.942408 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:45 crc kubenswrapper[4780]: I1205 08:04:45.226785 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28c190e9-56f3-48c7-a072-4052688197f5","Type":"ContainerStarted","Data":"f341c200facabca1a6c64a95e90d1e8d82e7e763163748fb35c3ac7dcc89d1aa"} Dec 05 08:04:45 crc kubenswrapper[4780]: I1205 08:04:45.614418 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:04:46 crc kubenswrapper[4780]: I1205 08:04:46.248736 4780 generic.go:334] "Generic (PLEG): container finished" podID="58d15314-573f-476b-ad30-0f8881002188" containerID="f4acbbcba787405882debc5aefbf01b6249f4cc5ffc2f72b04734cb9a067e6b8" exitCode=0 Dec 05 08:04:46 crc kubenswrapper[4780]: I1205 08:04:46.248995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerDied","Data":"f4acbbcba787405882debc5aefbf01b6249f4cc5ffc2f72b04734cb9a067e6b8"} Dec 05 08:04:46 crc kubenswrapper[4780]: I1205 08:04:46.249323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerStarted","Data":"86d3bd78702103e27eab4b6e46a0dab4ce8dcd5bfabc5690414bee94d4bf9397"} Dec 05 08:04:47 crc kubenswrapper[4780]: I1205 08:04:47.261837 4780 generic.go:334] "Generic (PLEG): container finished" podID="58d15314-573f-476b-ad30-0f8881002188" containerID="1fd9652ac2b1cfedf798c8ea3724f698e5037d2d17c982909efa233b024f8cc4" exitCode=0 Dec 05 08:04:47 crc kubenswrapper[4780]: I1205 08:04:47.261925 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerDied","Data":"1fd9652ac2b1cfedf798c8ea3724f698e5037d2d17c982909efa233b024f8cc4"} Dec 05 08:04:48 crc kubenswrapper[4780]: I1205 08:04:48.273378 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerStarted","Data":"81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58"} Dec 05 08:04:48 crc kubenswrapper[4780]: I1205 08:04:48.298526 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pnqtd" podStartSLOduration=2.873661404 podStartE2EDuration="4.298506729s" podCreationTimestamp="2025-12-05 08:04:44 +0000 UTC" firstStartedPulling="2025-12-05 08:04:46.254162619 +0000 UTC m=+4720.323678951" lastFinishedPulling="2025-12-05 08:04:47.679007944 +0000 UTC m=+4721.748524276" observedRunningTime="2025-12-05 08:04:48.292156233 +0000 UTC m=+4722.361672555" watchObservedRunningTime="2025-12-05 08:04:48.298506729 +0000 UTC m=+4722.368023061" Dec 05 08:04:53 crc kubenswrapper[4780]: I1205 08:04:53.138901 4780 scope.go:117] "RemoveContainer" 
containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:04:53 crc kubenswrapper[4780]: E1205 08:04:53.139601 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:04:54 crc kubenswrapper[4780]: I1205 08:04:54.943485 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:54 crc kubenswrapper[4780]: I1205 08:04:54.943903 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:54 crc kubenswrapper[4780]: I1205 08:04:54.991718 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:55 crc kubenswrapper[4780]: I1205 08:04:55.402676 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:04:55 crc kubenswrapper[4780]: I1205 08:04:55.445864 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:04:57 crc kubenswrapper[4780]: I1205 08:04:57.362002 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pnqtd" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="registry-server" containerID="cri-o://81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" gracePeriod=2 Dec 05 08:04:58 crc kubenswrapper[4780]: I1205 08:04:58.374604 4780 generic.go:334] "Generic (PLEG): container finished" podID="58d15314-573f-476b-ad30-0f8881002188" containerID="81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" exitCode=0 Dec 05 08:04:58 crc kubenswrapper[4780]: I1205 08:04:58.374662 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerDied","Data":"81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58"} Dec 05 08:05:04 crc kubenswrapper[4780]: E1205 08:05:04.944075 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58 is running failed: container process not found" containerID="81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 08:05:04 crc kubenswrapper[4780]: E1205 08:05:04.945254 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58 is running failed: container process not found" containerID="81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 08:05:04 crc kubenswrapper[4780]: E1205 08:05:04.945681 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58 is running failed: container process not found" containerID="81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 08:05:04 crc kubenswrapper[4780]: E1205 08:05:04.945712 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-pnqtd" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="registry-server" Dec 05 08:05:05 crc kubenswrapper[4780]: I1205 08:05:05.139290 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:05:05 crc kubenswrapper[4780]: E1205 08:05:05.139634 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:05:10 crc kubenswrapper[4780]: E1205 08:05:10.843103 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605" Dec 05 08:05:10 crc kubenswrapper[4780]: E1205 08:05:10.844405 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605" Dec 05 08:05:10 crc kubenswrapper[4780]: E1205 08:05:10.844621 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(6c5eef2c-eb08-4610-8639-44aaa305bb28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:10 crc kubenswrapper[4780]: E1205 08:05:10.846796 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.498756 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.510106 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:65066e8ca260a75886ae57f157049605" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.510171 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:65066e8ca260a75886ae57f157049605" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.510410 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:65066e8ca260a75886ae57f157049605,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc8h5bfh58bh5dfh89hbh5bch666h5f4h545h567h569h54ch587h657h58bh555h5c8h77hdfhbch5d7hdfh5dbh657h76h5cfh96h7h96hb9h676q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmmph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(28c190e9-56f3-48c7-a072-4052688197f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.511840 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/memcached-0" podUID="28c190e9-56f3-48c7-a072-4052688197f5" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.537002 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.537061 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.537219 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg694,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a5ac8b5d-6f67-4694-a962-a05961a7868c): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:11 crc kubenswrapper[4780]: E1205 08:05:11.538445 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" Dec 05 08:05:12 crc kubenswrapper[4780]: E1205 08:05:12.507700 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/memcached-0" podUID="28c190e9-56f3-48c7-a072-4052688197f5" Dec 05 08:05:12 crc kubenswrapper[4780]: E1205 08:05:12.507717 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" Dec 05 08:05:13 crc kubenswrapper[4780]: E1205 08:05:13.404364 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605" Dec 05 08:05:13 crc kubenswrapper[4780]: E1205 08:05:13.404419 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605" Dec 05 08:05:13 crc kubenswrapper[4780]: E1205 08:05:13.404542 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blqhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(39137b68-813b-4d2a-b543-730c76488431): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:13 crc kubenswrapper[4780]: E1205 08:05:13.406712 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="39137b68-813b-4d2a-b543-730c76488431" Dec 05 08:05:13 crc kubenswrapper[4780]: E1205 08:05:13.511901 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="39137b68-813b-4d2a-b543-730c76488431" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.036174 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.036593 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.036725 
4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb9tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59864b8655-pq4q6_openstack(3345bdcb-2db9-4b87-bda3-0263c740312b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.037740 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.037799 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.037850 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" podUID="3345bdcb-2db9-4b87-bda3-0263c740312b" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.037947 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkcvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7859d78785-xzbhx_openstack(603f710c-c15e-4675-86aa-bf9771447da6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.039855 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" podUID="603f710c-c15e-4675-86aa-bf9771447da6" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.044400 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.044437 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.044577 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h54dhb7h666h69h76h59ch55ch65ch596h8h79h5c8h57hc8hfch5d7h697h79h698h5fch644hf9h54chbfh655hfchcbh5f8h646h5f7h89q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwjmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f568b98b7-pxzjb_openstack(cf03be4c-26ac-49ed-b103-53c0d2f15eb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.045733 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" podUID="cf03be4c-26ac-49ed-b103-53c0d2f15eb5" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.056724 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.061703 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.061761 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.061905 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjhnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(ff360368-93b6-4ab0-b1d6-e53682ec9336): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.063502 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="ff360368-93b6-4ab0-b1d6-e53682ec9336" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.089426 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.089487 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.089604 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n564h564h676h699hcdh67bh66hfdh569h545h648h94h546h696h668h89h96h667h575h595h5d9h584h8dhbdh697h54bhb7h58fh5c9hd8h5cdh5c7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwtjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84496478f-9kwnv_openstack(d0d508c0-39e1-48e4-8e81-86ac0c4ee01b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.091708 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84496478f-9kwnv" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.191046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content\") pod \"58d15314-573f-476b-ad30-0f8881002188\" 
(UID: \"58d15314-573f-476b-ad30-0f8881002188\") " Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.191127 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsq9\" (UniqueName: \"kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9\") pod \"58d15314-573f-476b-ad30-0f8881002188\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.191243 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities\") pod \"58d15314-573f-476b-ad30-0f8881002188\" (UID: \"58d15314-573f-476b-ad30-0f8881002188\") " Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.192196 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities" (OuterVolumeSpecName: "utilities") pod "58d15314-573f-476b-ad30-0f8881002188" (UID: "58d15314-573f-476b-ad30-0f8881002188"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.196423 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9" (OuterVolumeSpecName: "kube-api-access-vbsq9") pod "58d15314-573f-476b-ad30-0f8881002188" (UID: "58d15314-573f-476b-ad30-0f8881002188"). InnerVolumeSpecName "kube-api-access-vbsq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.243611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d15314-573f-476b-ad30-0f8881002188" (UID: "58d15314-573f-476b-ad30-0f8881002188"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.293249 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsq9\" (UniqueName: \"kubernetes.io/projected/58d15314-573f-476b-ad30-0f8881002188-kube-api-access-vbsq9\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.293281 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.293290 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15314-573f-476b-ad30-0f8881002188-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.518722 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pnqtd" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.518732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pnqtd" event={"ID":"58d15314-573f-476b-ad30-0f8881002188","Type":"ContainerDied","Data":"86d3bd78702103e27eab4b6e46a0dab4ce8dcd5bfabc5690414bee94d4bf9397"} Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.518784 4780 scope.go:117] "RemoveContainer" containerID="81875a1f7a4f002f121d9eb660688487341bfaeb13b57267bdfa04eb47039c58" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.533311 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/openstack-galera-0" podUID="ff360368-93b6-4ab0-b1d6-e53682ec9336" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.533568 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/dnsmasq-dns-84496478f-9kwnv" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" Dec 05 08:05:14 crc kubenswrapper[4780]: E1205 08:05:14.545508 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" podUID="603f710c-c15e-4675-86aa-bf9771447da6" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.593335 4780 scope.go:117] "RemoveContainer" containerID="1fd9652ac2b1cfedf798c8ea3724f698e5037d2d17c982909efa233b024f8cc4" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.639947 4780 scope.go:117] "RemoveContainer" containerID="f4acbbcba787405882debc5aefbf01b6249f4cc5ffc2f72b04734cb9a067e6b8" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.664142 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.670908 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pnqtd"] Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.861820 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:05:14 crc kubenswrapper[4780]: I1205 08:05:14.945329 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.004638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb9tt\" (UniqueName: \"kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt\") pod \"3345bdcb-2db9-4b87-bda3-0263c740312b\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.004818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config\") pod \"3345bdcb-2db9-4b87-bda3-0263c740312b\" (UID: \"3345bdcb-2db9-4b87-bda3-0263c740312b\") " Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.005382 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config" (OuterVolumeSpecName: "config") pod "3345bdcb-2db9-4b87-bda3-0263c740312b" (UID: "3345bdcb-2db9-4b87-bda3-0263c740312b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.008326 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt" (OuterVolumeSpecName: "kube-api-access-kb9tt") pod "3345bdcb-2db9-4b87-bda3-0263c740312b" (UID: "3345bdcb-2db9-4b87-bda3-0263c740312b"). InnerVolumeSpecName "kube-api-access-kb9tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.106841 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config\") pod \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.107159 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc\") pod \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.107284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwjmc\" (UniqueName: \"kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc\") pod \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\" (UID: \"cf03be4c-26ac-49ed-b103-53c0d2f15eb5\") " Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.107599 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config" (OuterVolumeSpecName: "config") pod "cf03be4c-26ac-49ed-b103-53c0d2f15eb5" (UID: "cf03be4c-26ac-49ed-b103-53c0d2f15eb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.107630 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf03be4c-26ac-49ed-b103-53c0d2f15eb5" (UID: "cf03be4c-26ac-49ed-b103-53c0d2f15eb5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.108349 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.108366 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3345bdcb-2db9-4b87-bda3-0263c740312b-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.108378 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.108388 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb9tt\" (UniqueName: \"kubernetes.io/projected/3345bdcb-2db9-4b87-bda3-0263c740312b-kube-api-access-kb9tt\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.109962 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc" (OuterVolumeSpecName: "kube-api-access-hwjmc") pod "cf03be4c-26ac-49ed-b103-53c0d2f15eb5" (UID: "cf03be4c-26ac-49ed-b103-53c0d2f15eb5"). InnerVolumeSpecName "kube-api-access-hwjmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.209720 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwjmc\" (UniqueName: \"kubernetes.io/projected/cf03be4c-26ac-49ed-b103-53c0d2f15eb5-kube-api-access-hwjmc\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.525403 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.527000 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f568b98b7-pxzjb" event={"ID":"cf03be4c-26ac-49ed-b103-53c0d2f15eb5","Type":"ContainerDied","Data":"94cbe0de4df42df3e382a6433bab7c72400bfe5679ce9dbab8d943ad9d81f1fb"} Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.528867 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" event={"ID":"3345bdcb-2db9-4b87-bda3-0263c740312b","Type":"ContainerDied","Data":"11f54efd79d581d45e24089d40168b43270bbfaff1fac1090fafda810b3c0772"} Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.528954 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59864b8655-pq4q6" Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.576188 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.584291 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f568b98b7-pxzjb"] Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.617636 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:05:15 crc kubenswrapper[4780]: I1205 08:05:15.622594 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59864b8655-pq4q6"] Dec 05 08:05:16 crc kubenswrapper[4780]: I1205 08:05:16.150159 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3345bdcb-2db9-4b87-bda3-0263c740312b" path="/var/lib/kubelet/pods/3345bdcb-2db9-4b87-bda3-0263c740312b/volumes" Dec 05 08:05:16 crc kubenswrapper[4780]: I1205 08:05:16.150665 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d15314-573f-476b-ad30-0f8881002188" path="/var/lib/kubelet/pods/58d15314-573f-476b-ad30-0f8881002188/volumes" Dec 05 08:05:16 crc kubenswrapper[4780]: I1205 08:05:16.151515 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf03be4c-26ac-49ed-b103-53c0d2f15eb5" path="/var/lib/kubelet/pods/cf03be4c-26ac-49ed-b103-53c0d2f15eb5/volumes" Dec 05 08:05:20 crc kubenswrapper[4780]: I1205 08:05:20.140743 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:05:20 crc kubenswrapper[4780]: E1205 08:05:20.141431 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:05:23 crc kubenswrapper[4780]: I1205 08:05:23.620500 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerStarted","Data":"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec"} Dec 05 08:05:25 crc kubenswrapper[4780]: I1205 08:05:25.665332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"39137b68-813b-4d2a-b543-730c76488431","Type":"ContainerStarted","Data":"da1386b966d422dfecffce4e577be60332b1cc9b6e9d48ab5244a72152448fd2"} Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.680466 4780 generic.go:334] "Generic (PLEG): container finished" podID="603f710c-c15e-4675-86aa-bf9771447da6" containerID="5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e" exitCode=0 Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.680551 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" event={"ID":"603f710c-c15e-4675-86aa-bf9771447da6","Type":"ContainerDied","Data":"5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e"} Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.683934 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"28c190e9-56f3-48c7-a072-4052688197f5","Type":"ContainerStarted","Data":"173ff44011230f6412643b25dbad7872286b7a82508a7b0560a8aeed6c6b31ac"} Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.684155 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.686021 4780 generic.go:334] "Generic (PLEG): container finished" podID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerID="2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366" exitCode=0 Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.686052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84496478f-9kwnv" event={"ID":"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b","Type":"ContainerDied","Data":"2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366"} Dec 05 08:05:27 crc kubenswrapper[4780]: I1205 08:05:27.737916 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.647097474 podStartE2EDuration="44.737894895s" podCreationTimestamp="2025-12-05 08:04:43 +0000 UTC" firstStartedPulling="2025-12-05 08:04:44.290997129 +0000 UTC m=+4718.360513471" lastFinishedPulling="2025-12-05 08:05:27.38179456 +0000 UTC m=+4761.451310892" observedRunningTime="2025-12-05 08:05:27.732591749 +0000 UTC m=+4761.802108081" watchObservedRunningTime="2025-12-05 08:05:27.737894895 +0000 UTC m=+4761.807411237" Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.695089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84496478f-9kwnv" event={"ID":"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b","Type":"ContainerStarted","Data":"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21"} Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.696283 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.697456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" event={"ID":"603f710c-c15e-4675-86aa-bf9771447da6","Type":"ContainerStarted","Data":"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e"} Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.697719 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.722596 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84496478f-9kwnv" podStartSLOduration=2.767826495 podStartE2EDuration="49.722579246s" podCreationTimestamp="2025-12-05 08:04:39 +0000 UTC" firstStartedPulling="2025-12-05 08:04:40.344370607 +0000 UTC m=+4714.413886939" lastFinishedPulling="2025-12-05 08:05:27.299123358 +0000 UTC m=+4761.368639690" observedRunningTime="2025-12-05 08:05:28.719510731 +0000 UTC m=+4762.789027063" watchObservedRunningTime="2025-12-05 08:05:28.722579246 +0000 UTC m=+4762.792095588" Dec 05 08:05:28 crc kubenswrapper[4780]: I1205 08:05:28.741159 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" podStartSLOduration=2.345012317 podStartE2EDuration="49.741141708s" podCreationTimestamp="2025-12-05 08:04:39 +0000 UTC" firstStartedPulling="2025-12-05 08:04:39.958321464 +0000 UTC m=+4714.027837796" lastFinishedPulling="2025-12-05 08:05:27.354450855 +0000 UTC 
m=+4761.423967187" observedRunningTime="2025-12-05 08:05:28.738417954 +0000 UTC m=+4762.807934286" watchObservedRunningTime="2025-12-05 08:05:28.741141708 +0000 UTC m=+4762.810658030" Dec 05 08:05:29 crc kubenswrapper[4780]: I1205 08:05:29.727813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ff360368-93b6-4ab0-b1d6-e53682ec9336","Type":"ContainerStarted","Data":"b7c1abf8f124cfb1669d009f522bcc43dd2cc67a8b7c44609a9a28f9dd734e07"} Dec 05 08:05:29 crc kubenswrapper[4780]: I1205 08:05:29.730058 4780 generic.go:334] "Generic (PLEG): container finished" podID="39137b68-813b-4d2a-b543-730c76488431" containerID="da1386b966d422dfecffce4e577be60332b1cc9b6e9d48ab5244a72152448fd2" exitCode=0 Dec 05 08:05:29 crc kubenswrapper[4780]: I1205 08:05:29.730125 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"39137b68-813b-4d2a-b543-730c76488431","Type":"ContainerDied","Data":"da1386b966d422dfecffce4e577be60332b1cc9b6e9d48ab5244a72152448fd2"} Dec 05 08:05:29 crc kubenswrapper[4780]: I1205 08:05:29.733453 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerStarted","Data":"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760"} Dec 05 08:05:30 crc kubenswrapper[4780]: I1205 08:05:30.741134 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"39137b68-813b-4d2a-b543-730c76488431","Type":"ContainerStarted","Data":"f60840fd9d7bcfc51886ee3f68c5e5751e351ef0380c9966f88c58eb3e3f381a"} Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.138722 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:05:33 crc kubenswrapper[4780]: E1205 08:05:33.139221 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.247115 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.247166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.792612 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.810328 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.397616366 podStartE2EDuration="52.810308333s" podCreationTimestamp="2025-12-05 08:04:41 +0000 UTC" firstStartedPulling="2025-12-05 08:04:43.927972401 +0000 UTC m=+4717.997488733" lastFinishedPulling="2025-12-05 08:05:25.340664368 +0000 UTC m=+4759.410180700" observedRunningTime="2025-12-05 08:05:30.763686797 +0000 UTC m=+4764.833203149" watchObservedRunningTime="2025-12-05 08:05:33.810308333 +0000 UTC m=+4767.879824675" Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.980348 4780 
generic.go:334] "Generic (PLEG): container finished" podID="ff360368-93b6-4ab0-b1d6-e53682ec9336" containerID="b7c1abf8f124cfb1669d009f522bcc43dd2cc67a8b7c44609a9a28f9dd734e07" exitCode=0 Dec 05 08:05:33 crc kubenswrapper[4780]: I1205 08:05:33.980411 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ff360368-93b6-4ab0-b1d6-e53682ec9336","Type":"ContainerDied","Data":"b7c1abf8f124cfb1669d009f522bcc43dd2cc67a8b7c44609a9a28f9dd734e07"} Dec 05 08:05:34 crc kubenswrapper[4780]: I1205 08:05:34.445014 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:05:34 crc kubenswrapper[4780]: I1205 08:05:34.733024 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:05:34 crc kubenswrapper[4780]: I1205 08:05:34.827464 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:05:34 crc kubenswrapper[4780]: I1205 08:05:34.991028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ff360368-93b6-4ab0-b1d6-e53682ec9336","Type":"ContainerStarted","Data":"8d8b7f9dc4e5c64861f194f12e58a277b20e00ae22fc1e11ed57b0316e44f5ab"} Dec 05 08:05:34 crc kubenswrapper[4780]: I1205 08:05:34.991163 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="dnsmasq-dns" containerID="cri-o://8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e" gracePeriod=10 Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.013215 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371981.841578 podStartE2EDuration="55.013198594s" podCreationTimestamp="2025-12-05 08:04:40 +0000 UTC" firstStartedPulling="2025-12-05 08:04:42.352083467 +0000 UTC m=+4716.421599809" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:05:35.012941237 +0000 UTC m=+4769.082457569" watchObservedRunningTime="2025-12-05 08:05:35.013198594 +0000 UTC m=+4769.082714926" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.412168 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.514802 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config\") pod \"603f710c-c15e-4675-86aa-bf9771447da6\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.515652 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkcvd\" (UniqueName: \"kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd\") pod \"603f710c-c15e-4675-86aa-bf9771447da6\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.515817 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc\") pod \"603f710c-c15e-4675-86aa-bf9771447da6\" (UID: \"603f710c-c15e-4675-86aa-bf9771447da6\") " Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.523123 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd" (OuterVolumeSpecName: "kube-api-access-hkcvd") pod "603f710c-c15e-4675-86aa-bf9771447da6" (UID: "603f710c-c15e-4675-86aa-bf9771447da6"). InnerVolumeSpecName "kube-api-access-hkcvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.563033 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config" (OuterVolumeSpecName: "config") pod "603f710c-c15e-4675-86aa-bf9771447da6" (UID: "603f710c-c15e-4675-86aa-bf9771447da6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.570425 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "603f710c-c15e-4675-86aa-bf9771447da6" (UID: "603f710c-c15e-4675-86aa-bf9771447da6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.619448 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.620329 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603f710c-c15e-4675-86aa-bf9771447da6-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:35 crc kubenswrapper[4780]: I1205 08:05:35.620419 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkcvd\" (UniqueName: \"kubernetes.io/projected/603f710c-c15e-4675-86aa-bf9771447da6-kube-api-access-hkcvd\") on node \"crc\" DevicePath \"\"" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.001101 4780 generic.go:334] "Generic (PLEG): container finished" podID="603f710c-c15e-4675-86aa-bf9771447da6" containerID="8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e" exitCode=0 Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.001143 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.001142 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" event={"ID":"603f710c-c15e-4675-86aa-bf9771447da6","Type":"ContainerDied","Data":"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e"} Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.001308 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859d78785-xzbhx" event={"ID":"603f710c-c15e-4675-86aa-bf9771447da6","Type":"ContainerDied","Data":"57500d156fbfac9c89660659f5c64fca39768c48cdb5174dca9de355adf20f80"} Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.001357 4780 scope.go:117] "RemoveContainer" containerID="8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.030496 4780 scope.go:117] "RemoveContainer" containerID="5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.036691 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.065364 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7859d78785-xzbhx"] Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.075128 4780 scope.go:117] "RemoveContainer" containerID="8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e" Dec 05 08:05:36 crc kubenswrapper[4780]: E1205 08:05:36.078206 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e\": container with ID starting with 8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e not found: ID does not exist" containerID="8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.078325 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e"} err="failed to get container status 
\"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e\": rpc error: code = NotFound desc = could not find container \"8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e\": container with ID starting with 8a49991a3b73481e90b5a677d4eb2eababaa2143b852fb36c13a2ee363276e6e not found: ID does not exist" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.078360 4780 scope.go:117] "RemoveContainer" containerID="5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e" Dec 05 08:05:36 crc kubenswrapper[4780]: E1205 08:05:36.078836 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e\": container with ID starting with 5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e not found: ID does not exist" containerID="5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.078865 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e"} err="failed to get container status \"5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e\": rpc error: code = NotFound desc = could not find container \"5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e\": container with ID starting with 5a8ea86d6e6732287001787529a1094a17a1e58d162735f1ef145f20f02e932e not found: ID does not exist" Dec 05 08:05:36 crc kubenswrapper[4780]: I1205 08:05:36.147852 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603f710c-c15e-4675-86aa-bf9771447da6" path="/var/lib/kubelet/pods/603f710c-c15e-4675-86aa-bf9771447da6/volumes" Dec 05 08:05:37 crc kubenswrapper[4780]: I1205 08:05:37.322328 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 05 08:05:37 crc kubenswrapper[4780]: I1205 08:05:37.421966 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 05 08:05:41 crc kubenswrapper[4780]: I1205 08:05:41.702438 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 08:05:41 crc kubenswrapper[4780]: I1205 08:05:41.703315 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 08:05:41 crc kubenswrapper[4780]: I1205 08:05:41.796612 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 08:05:42 crc kubenswrapper[4780]: I1205 08:05:42.184192 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 08:05:47 crc kubenswrapper[4780]: I1205 08:05:47.139956 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:05:47 crc kubenswrapper[4780]: E1205 08:05:47.140836 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:05:57 crc 
Dec 05 08:05:57 crc kubenswrapper[4780]: I1205 08:05:57.167226 4780 generic.go:334] "Generic (PLEG): container finished" podID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerID="598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec" exitCode=0
Dec 05 08:05:57 crc kubenswrapper[4780]: I1205 08:05:57.167913 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerDied","Data":"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec"}
Dec 05 08:05:58 crc kubenswrapper[4780]: I1205 08:05:58.139455 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b"
Dec 05 08:05:58 crc kubenswrapper[4780]: E1205 08:05:58.139966 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:05:58 crc kubenswrapper[4780]: I1205 08:05:58.176840 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerStarted","Data":"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5"}
Dec 05 08:05:58 crc kubenswrapper[4780]: I1205 08:05:58.178013 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 08:05:58 crc kubenswrapper[4780]: I1205 08:05:58.203927 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.628740698 podStartE2EDuration="1m19.203904625s" podCreationTimestamp="2025-12-05 08:04:39 +0000 UTC" firstStartedPulling="2025-12-05 08:04:41.772870584 +0000 UTC m=+4715.842386916" lastFinishedPulling="2025-12-05 08:05:22.348034511 +0000 UTC m=+4756.417550843" observedRunningTime="2025-12-05 08:05:58.202414484 +0000 UTC m=+4792.271930826" watchObservedRunningTime="2025-12-05 08:05:58.203904625 +0000 UTC m=+4792.273421117"
Dec 05 08:06:01 crc kubenswrapper[4780]: I1205 08:06:01.200053 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerID="a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760" exitCode=0
Dec 05 08:06:01 crc kubenswrapper[4780]: I1205 08:06:01.200129 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerDied","Data":"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760"}
Dec 05 08:06:02 crc kubenswrapper[4780]: I1205 08:06:02.208112 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerStarted","Data":"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0"}
Dec 05 08:06:02 crc kubenswrapper[4780]: I1205 08:06:02.208850 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 05 08:06:02 crc kubenswrapper[4780]: I1205 08:06:02.229253 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0"
podStartSLOduration=-9223371953.625542 podStartE2EDuration="1m23.229234807s" podCreationTimestamp="2025-12-05 08:04:39 +0000 UTC" firstStartedPulling="2025-12-05 08:04:41.836344186 +0000 UTC m=+4715.905860528" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:06:02.228174328 +0000 UTC m=+4796.297690660" watchObservedRunningTime="2025-12-05 08:06:02.229234807 +0000 UTC m=+4796.298751139" Dec 05 08:06:10 crc kubenswrapper[4780]: I1205 08:06:10.916076 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:11 crc kubenswrapper[4780]: I1205 08:06:11.191092 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 08:06:13 crc kubenswrapper[4780]: I1205 08:06:13.138788 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:06:13 crc kubenswrapper[4780]: E1205 08:06:13.139311 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.147211 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:06:18 crc kubenswrapper[4780]: E1205 08:06:18.149064 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="registry-server" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.149185 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="registry-server" Dec 05 08:06:18 crc kubenswrapper[4780]: E1205 08:06:18.149276 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="init" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.149364 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="init" Dec 05 08:06:18 crc kubenswrapper[4780]: E1205 08:06:18.149471 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="dnsmasq-dns" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.149603 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="dnsmasq-dns" Dec 05 08:06:18 crc kubenswrapper[4780]: E1205 08:06:18.149717 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="extract-utilities" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.149812 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="extract-utilities" Dec 05 08:06:18 crc kubenswrapper[4780]: E1205 08:06:18.149938 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="extract-content" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.150031 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15314-573f-476b-ad30-0f8881002188" 
containerName="extract-content" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.150313 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d15314-573f-476b-ad30-0f8881002188" containerName="registry-server" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.150414 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="603f710c-c15e-4675-86aa-bf9771447da6" containerName="dnsmasq-dns" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.151534 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.156658 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.311847 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzrfg\" (UniqueName: \"kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.311955 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.312025 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.413299 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzrfg\" (UniqueName: \"kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.413385 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.413433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.414519 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 
08:06:18.415038 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.432403 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzrfg\" (UniqueName: \"kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg\") pod \"dnsmasq-dns-778d75ccf7-8zt8b\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.472288 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.745325 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:06:18 crc kubenswrapper[4780]: I1205 08:06:18.815873 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:19 crc kubenswrapper[4780]: I1205 08:06:19.360861 4780 generic.go:334] "Generic (PLEG): container finished" podID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerID="03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d" exitCode=0 Dec 05 08:06:19 crc kubenswrapper[4780]: I1205 08:06:19.360920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" event={"ID":"ceee5562-3cff-4c1c-a163-d40e75cacedb","Type":"ContainerDied","Data":"03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d"} Dec 05 08:06:19 crc kubenswrapper[4780]: I1205 08:06:19.360948 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" event={"ID":"ceee5562-3cff-4c1c-a163-d40e75cacedb","Type":"ContainerStarted","Data":"6c88af2d1e5d6b29c80179376c7e88c85d9460ac95e3123bcf0f9997283286b6"} Dec 05 08:06:19 crc kubenswrapper[4780]: I1205 08:06:19.457814 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:20 crc kubenswrapper[4780]: I1205 08:06:20.369094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" event={"ID":"ceee5562-3cff-4c1c-a163-d40e75cacedb","Type":"ContainerStarted","Data":"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b"} Dec 05 08:06:20 crc kubenswrapper[4780]: I1205 08:06:20.369531 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:20 crc kubenswrapper[4780]: I1205 08:06:20.391407 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" podStartSLOduration=2.39138771 podStartE2EDuration="2.39138771s" podCreationTimestamp="2025-12-05 08:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:06:20.385503169 +0000 UTC m=+4814.455019511" watchObservedRunningTime="2025-12-05 08:06:20.39138771 +0000 UTC m=+4814.460904042" Dec 05 08:06:22 crc kubenswrapper[4780]: I1205 08:06:22.634452 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" 
containerName="rabbitmq" containerID="cri-o://c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0" gracePeriod=604797 Dec 05 08:06:23 crc kubenswrapper[4780]: I1205 08:06:23.222070 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="rabbitmq" containerID="cri-o://2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5" gracePeriod=604797 Dec 05 08:06:24 crc kubenswrapper[4780]: I1205 08:06:24.138586 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:06:24 crc kubenswrapper[4780]: E1205 08:06:24.138854 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:06:28 crc kubenswrapper[4780]: I1205 08:06:28.474411 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:06:28 crc kubenswrapper[4780]: I1205 08:06:28.525166 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:06:28 crc kubenswrapper[4780]: I1205 08:06:28.525823 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84496478f-9kwnv" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="dnsmasq-dns" containerID="cri-o://b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21" gracePeriod=10 Dec 05 08:06:28 crc kubenswrapper[4780]: I1205 08:06:28.981226 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.071328 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config\") pod \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.071509 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwtjc\" (UniqueName: \"kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc\") pod \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.071575 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc\") pod \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\" (UID: \"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.080249 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc" (OuterVolumeSpecName: "kube-api-access-qwtjc") pod "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" (UID: "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b"). InnerVolumeSpecName "kube-api-access-qwtjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.108052 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" (UID: "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.113021 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config" (OuterVolumeSpecName: "config") pod "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" (UID: "d0d508c0-39e1-48e4-8e81-86ac0c4ee01b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.113737 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.173535 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.173582 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.173591 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwtjc\" (UniqueName: \"kubernetes.io/projected/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b-kube-api-access-qwtjc\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.274734 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.274894 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.274922 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.274965 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.274988 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275055 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275104 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275126 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275173 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg694\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275206 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls\") pod \"a5ac8b5d-6f67-4694-a962-a05961a7868c\" (UID: \"a5ac8b5d-6f67-4694-a962-a05961a7868c\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.275435 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.276003 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.276986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.277311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.279980 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694" (OuterVolumeSpecName: "kube-api-access-hg694") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "kube-api-access-hg694". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.284240 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.284456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.284587 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.293051 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8" (OuterVolumeSpecName: "persistence") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.298192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data" (OuterVolumeSpecName: "config-data") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.310432 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.353230 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5ac8b5d-6f67-4694-a962-a05961a7868c" (UID: "a5ac8b5d-6f67-4694-a962-a05961a7868c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377063 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5ac8b5d-6f67-4694-a962-a05961a7868c-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377098 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377109 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377119 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg694\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-kube-api-access-hg694\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377128 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377136 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5ac8b5d-6f67-4694-a962-a05961a7868c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377174 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") on node \"crc\" " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377188 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377200 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5ac8b5d-6f67-4694-a962-a05961a7868c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.377209 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5ac8b5d-6f67-4694-a962-a05961a7868c-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.396668 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.396840 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8") on node "crc" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.430900 4780 generic.go:334] "Generic (PLEG): container finished" podID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerID="c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0" exitCode=0 Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.430970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerDied","Data":"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0"} Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.430977 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.430995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a5ac8b5d-6f67-4694-a962-a05961a7868c","Type":"ContainerDied","Data":"2755e0d24949901fa33a057a72e59b4b64067608fe34f6fec65875dfed57bc46"} Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.431014 4780 scope.go:117] "RemoveContainer" containerID="c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.434410 4780 generic.go:334] "Generic (PLEG): container finished" podID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerID="b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21" exitCode=0 Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.434457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84496478f-9kwnv" event={"ID":"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b","Type":"ContainerDied","Data":"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21"} Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.434515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84496478f-9kwnv" event={"ID":"d0d508c0-39e1-48e4-8e81-86ac0c4ee01b","Type":"ContainerDied","Data":"8b225b132f4efbad522b41ccd44067fbdbe481c5af333ebef2ba89d0cb799cd7"} Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.434603 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84496478f-9kwnv" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.468344 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.475904 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.484670 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.487682 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.498660 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84496478f-9kwnv"] Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.505347 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.505763 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="init" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.505782 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="init" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.505826 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="dnsmasq-dns" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.505832 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="dnsmasq-dns" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.505848 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerName="rabbitmq" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.505854 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerName="rabbitmq" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.505865 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerName="setup-container" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.505871 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerName="setup-container" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.506030 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" containerName="rabbitmq" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.506047 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" containerName="dnsmasq-dns" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.506821 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.511969 4780 scope.go:117] "RemoveContainer" containerID="a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.512199 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.512199 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.512468 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.512773 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-27xcl" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.512968 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.515244 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.518467 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.526725 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.551204 4780 scope.go:117] "RemoveContainer" containerID="c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.551995 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0\": container with ID starting with c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0 not found: ID does not exist" containerID="c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.552035 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0"} err="failed to get container status \"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0\": rpc error: code = NotFound desc = could not find container \"c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0\": container with ID starting with c28af75e2f9e4c965b02fbe49603069dac81be2dc5e59e6cc849e23a25d8d0d0 not found: ID does not exist" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.552061 4780 scope.go:117] "RemoveContainer" containerID="a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.552464 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760\": container with ID starting with a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760 not found: ID does not exist" containerID="a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.552492 4780 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760"} err="failed to get container status \"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760\": rpc error: code = NotFound desc = could not find container \"a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760\": container with ID starting with a5cb18ae4dd5ca8f455ab41c47430a42c5a810c572d1a7b6bdaa1c0e5c216760 not found: ID does not exist" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.552509 4780 scope.go:117] "RemoveContainer" containerID="b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.567135 4780 scope.go:117] "RemoveContainer" containerID="2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.597963 4780 scope.go:117] "RemoveContainer" containerID="b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.598423 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21\": container with ID starting with b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21 not found: ID does not exist" containerID="b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.598494 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21"} err="failed to get container status \"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21\": rpc error: code = NotFound desc = could not find container \"b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21\": container with ID starting with b2cf908d31a51d08efad581f754a850942ca26a366e3a2a7f3e8121310fa4c21 not found: ID does not exist" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.598524 4780 scope.go:117] "RemoveContainer" containerID="2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366" Dec 05 08:06:29 crc kubenswrapper[4780]: E1205 08:06:29.598974 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366\": container with ID starting with 2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366 not found: ID does not exist" containerID="2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.599011 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366"} err="failed to get container status \"2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366\": rpc error: code = NotFound desc = could not find container \"2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366\": container with ID starting with 2576c0bc6d916077a2e4ff54cf15e61ce882034aa1aa1f08830556244d219366 not found: ID does not exist" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.686816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.686864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.686910 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687300 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687376 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687607 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-config-data\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687751 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rnz\" (UniqueName: 
\"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-kube-api-access-q2rnz\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.687778 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.748710 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.788849 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.788917 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.788948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2rnz\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-kube-api-access-q2rnz\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.788978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789019 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789079 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789165 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789249 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-config-data\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.789482 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.790054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-config-data\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.790137 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.790775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.791763 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.794321 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.798422 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.799500 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.799542 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/429c0c2d255be0f0018e965f2e9ddbb3fa5ebf67a906fb1e9d33c6536d2e5029/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.801042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.801903 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.815385 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2rnz\" (UniqueName: \"kubernetes.io/projected/5ccf6f95-5684-468d-a08e-dd0fa0e92c35-kube-api-access-q2rnz\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.848510 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba86fdca-b79e-4ffc-82be-91b7e8da5dd8\") pod \"rabbitmq-server-0\" (UID: \"5ccf6f95-5684-468d-a08e-dd0fa0e92c35\") " pod="openstack/rabbitmq-server-0" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890518 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890566 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890604 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf\") pod 
\"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890713 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890778 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890798 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890867 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890921 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890948 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.890978 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.891012 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkrqc\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc\") pod \"6c5eef2c-eb08-4610-8639-44aaa305bb28\" (UID: \"6c5eef2c-eb08-4610-8639-44aaa305bb28\") " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.891070 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.891426 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.893208 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.893435 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.894669 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.895074 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc" (OuterVolumeSpecName: "kube-api-access-jkrqc") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "kube-api-access-jkrqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.895545 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.895732 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info" (OuterVolumeSpecName: "pod-info") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.904574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5" (OuterVolumeSpecName: "persistence") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.910246 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data" (OuterVolumeSpecName: "config-data") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.934402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf" (OuterVolumeSpecName: "server-conf") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.977360 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6c5eef2c-eb08-4610-8639-44aaa305bb28" (UID: "6c5eef2c-eb08-4610-8639-44aaa305bb28"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992408 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992437 4780 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992471 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") on node \"crc\" " Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992488 4780 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992499 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5eef2c-eb08-4610-8639-44aaa305bb28-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992507 4780 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5eef2c-eb08-4610-8639-44aaa305bb28-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992515 4780 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5eef2c-eb08-4610-8639-44aaa305bb28-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992523 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc 
kubenswrapper[4780]: I1205 08:06:29.992573 4780 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5eef2c-eb08-4610-8639-44aaa305bb28-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:29 crc kubenswrapper[4780]: I1205 08:06:29.992584 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkrqc\" (UniqueName: \"kubernetes.io/projected/6c5eef2c-eb08-4610-8639-44aaa305bb28-kube-api-access-jkrqc\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.008401 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.008553 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5") on node "crc" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.093505 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") on node \"crc\" DevicePath \"\"" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.143661 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.146812 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ac8b5d-6f67-4694-a962-a05961a7868c" path="/var/lib/kubelet/pods/a5ac8b5d-6f67-4694-a962-a05961a7868c/volumes" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.147386 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d508c0-39e1-48e4-8e81-86ac0c4ee01b" path="/var/lib/kubelet/pods/d0d508c0-39e1-48e4-8e81-86ac0c4ee01b/volumes" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.362864 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.443442 4780 generic.go:334] "Generic (PLEG): container finished" podID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerID="2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5" exitCode=0 Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.443498 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.443530 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerDied","Data":"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5"} Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.443582 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c5eef2c-eb08-4610-8639-44aaa305bb28","Type":"ContainerDied","Data":"be12363ba27d1dcba5b72ca75dc1007aa72ac51f19d5270d57cc9e3637d4f4ca"} Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.443602 4780 scope.go:117] "RemoveContainer" containerID="2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.447528 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5ccf6f95-5684-468d-a08e-dd0fa0e92c35","Type":"ContainerStarted","Data":"db7f3806e2171bf7841d47c9c0d2a9a240fd8e9f3bf2ea71d40beeaa426a9fd1"} Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.473153 4780 scope.go:117] "RemoveContainer" containerID="598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.486397 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.491424 4780 scope.go:117] "RemoveContainer" containerID="2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5" Dec 05 08:06:30 crc kubenswrapper[4780]: E1205 08:06:30.494898 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5\": container with ID starting with 2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5 not found: ID does not exist" containerID="2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.494949 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5"} err="failed to get container status \"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5\": rpc error: code = NotFound desc = could not find container \"2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5\": container with ID starting with 2cf9561a699b398239e744ff6b75501afbebdc9b04545ecccff11f7b2fbe3cf5 not found: ID does not exist" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.494982 4780 scope.go:117] "RemoveContainer" containerID="598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec" Dec 05 08:06:30 crc kubenswrapper[4780]: E1205 08:06:30.495818 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec\": container with ID starting with 598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec not found: ID does not exist" containerID="598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.496114 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec"} err="failed to get container status \"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec\": rpc error: code = NotFound desc = could not find container \"598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec\": container with ID starting with 598c3a979295b1fe17f81ac7e1e65269c1c6d92003d25e550ec56f6b4f9107ec not found: ID does not exist" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.502798 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.508924 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:30 crc kubenswrapper[4780]: E1205 08:06:30.509501 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="setup-container" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.509661 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="setup-container" Dec 05 08:06:30 crc kubenswrapper[4780]: E1205 08:06:30.509756 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="rabbitmq" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.509813 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="rabbitmq" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.510008 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" containerName="rabbitmq" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.511210 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.517120 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519494 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519577 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-glftw" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519701 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519796 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519850 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.519498 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.521952 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599139 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599252 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcc59214-4e52-4392-bf7d-240a70c0326b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599310 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcc59214-4e52-4392-bf7d-240a70c0326b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599332 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599358 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599385 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599435 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfwb\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-kube-api-access-htfwb\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599470 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599531 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.599569 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.700777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htfwb\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-kube-api-access-htfwb\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701174 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701214 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 
08:06:30.701264 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701290 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701331 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcc59214-4e52-4392-bf7d-240a70c0326b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcc59214-4e52-4392-bf7d-240a70c0326b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701422 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701448 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.701647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.702295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.703110 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.703325 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.703741 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcc59214-4e52-4392-bf7d-240a70c0326b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.704663 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.704711 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/09c34a5aa647612aadfbe1efa3ae06313191248567b753dc88465c3bc16a440e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.705646 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcc59214-4e52-4392-bf7d-240a70c0326b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.705751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.705909 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcc59214-4e52-4392-bf7d-240a70c0326b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.707346 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.717387 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htfwb\" (UniqueName: \"kubernetes.io/projected/bcc59214-4e52-4392-bf7d-240a70c0326b-kube-api-access-htfwb\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.729462 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2262d09a-0a7c-495b-8347-cebd09f72ba5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bcc59214-4e52-4392-bf7d-240a70c0326b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:30 crc kubenswrapper[4780]: I1205 08:06:30.861196 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:06:31 crc kubenswrapper[4780]: I1205 08:06:31.290724 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 08:06:31 crc kubenswrapper[4780]: I1205 08:06:31.459465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5ccf6f95-5684-468d-a08e-dd0fa0e92c35","Type":"ContainerStarted","Data":"4b72a8a63dd5f6590e3ebb91b937f16e06cb459f1ef3fcae320deacc8a2b6bf3"} Dec 05 08:06:31 crc kubenswrapper[4780]: I1205 08:06:31.461187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bcc59214-4e52-4392-bf7d-240a70c0326b","Type":"ContainerStarted","Data":"7dc5c1b8ee10a720a1e111dd0c3916a5ccc61b781b6b317e26aea38d626553ef"} Dec 05 08:06:32 crc kubenswrapper[4780]: I1205 08:06:32.147180 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5eef2c-eb08-4610-8639-44aaa305bb28" path="/var/lib/kubelet/pods/6c5eef2c-eb08-4610-8639-44aaa305bb28/volumes" Dec 05 08:06:33 crc kubenswrapper[4780]: I1205 08:06:33.480604 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bcc59214-4e52-4392-bf7d-240a70c0326b","Type":"ContainerStarted","Data":"170607a70aa63c9b13d9bae8935cf7e44afe29781268cc85a8fa38f2a04aa173"} Dec 05 08:06:39 crc kubenswrapper[4780]: I1205 08:06:39.138787 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:06:39 crc kubenswrapper[4780]: E1205 08:06:39.139767 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:06:51 crc kubenswrapper[4780]: I1205 08:06:51.138814 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:06:51 crc kubenswrapper[4780]: E1205 08:06:51.140118 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:07:04 crc kubenswrapper[4780]: I1205 08:07:04.745320 4780 generic.go:334] "Generic (PLEG): container finished" podID="5ccf6f95-5684-468d-a08e-dd0fa0e92c35" containerID="4b72a8a63dd5f6590e3ebb91b937f16e06cb459f1ef3fcae320deacc8a2b6bf3" exitCode=0 Dec 05 08:07:04 crc kubenswrapper[4780]: I1205 08:07:04.745420 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5ccf6f95-5684-468d-a08e-dd0fa0e92c35","Type":"ContainerDied","Data":"4b72a8a63dd5f6590e3ebb91b937f16e06cb459f1ef3fcae320deacc8a2b6bf3"} Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.139006 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.755726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647"} Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.760038 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5ccf6f95-5684-468d-a08e-dd0fa0e92c35","Type":"ContainerStarted","Data":"2585e687c56e04d7e578101025356db516f2a96cbede592c2c98f032779eecfb"} Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.760913 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.762830 4780 generic.go:334] "Generic (PLEG): container finished" podID="bcc59214-4e52-4392-bf7d-240a70c0326b" containerID="170607a70aa63c9b13d9bae8935cf7e44afe29781268cc85a8fa38f2a04aa173" exitCode=0 Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.762856 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bcc59214-4e52-4392-bf7d-240a70c0326b","Type":"ContainerDied","Data":"170607a70aa63c9b13d9bae8935cf7e44afe29781268cc85a8fa38f2a04aa173"} Dec 05 08:07:05 crc kubenswrapper[4780]: I1205 08:07:05.812940 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.812845303 podStartE2EDuration="36.812845303s" podCreationTimestamp="2025-12-05 08:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:07:05.805827382 +0000 UTC m=+4859.875343734" watchObservedRunningTime="2025-12-05 08:07:05.812845303 +0000 UTC m=+4859.882361645" Dec 05 08:07:06 crc kubenswrapper[4780]: I1205 08:07:06.770577 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bcc59214-4e52-4392-bf7d-240a70c0326b","Type":"ContainerStarted","Data":"04d93693a37b8d872b8d125faece961c531f20982577209d58874a34b5cbe6e5"} Dec 05 08:07:06 crc kubenswrapper[4780]: I1205 08:07:06.771325 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:07:06 crc kubenswrapper[4780]: I1205 08:07:06.797599 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=36.797582574 podStartE2EDuration="36.797582574s" podCreationTimestamp="2025-12-05 08:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:07:06.791375085 +0000 UTC m=+4860.860891427" watchObservedRunningTime="2025-12-05 08:07:06.797582574 +0000 UTC m=+4860.867098906" Dec 05 08:07:20 crc kubenswrapper[4780]: I1205 08:07:20.150363 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 08:07:20 crc kubenswrapper[4780]: I1205 08:07:20.864160 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.781374 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.784490 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.789038 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j5kk9" Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.792698 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.878941 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrpb\" (UniqueName: \"kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb\") pod \"mariadb-client-1-default\" (UID: \"0a55d386-1cf0-4951-8dec-b2e0f679041c\") " pod="openstack/mariadb-client-1-default" Dec 05 08:07:24 crc kubenswrapper[4780]: I1205 08:07:24.980530 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrpb\" (UniqueName: \"kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb\") pod \"mariadb-client-1-default\" (UID: \"0a55d386-1cf0-4951-8dec-b2e0f679041c\") " pod="openstack/mariadb-client-1-default" Dec 05 08:07:25 crc kubenswrapper[4780]: I1205 08:07:25.000970 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrpb\" (UniqueName: \"kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb\") pod \"mariadb-client-1-default\" (UID: \"0a55d386-1cf0-4951-8dec-b2e0f679041c\") " pod="openstack/mariadb-client-1-default" Dec 05 08:07:25 crc kubenswrapper[4780]: I1205 08:07:25.106279 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 05 08:07:25 crc kubenswrapper[4780]: I1205 08:07:25.569974 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 05 08:07:25 crc kubenswrapper[4780]: W1205 08:07:25.574676 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a55d386_1cf0_4951_8dec_b2e0f679041c.slice/crio-69745f2f94446470dbb0b4a710a0ad4873126c88cbc0f59e524c89e51bc474d0 WatchSource:0}: Error finding container 69745f2f94446470dbb0b4a710a0ad4873126c88cbc0f59e524c89e51bc474d0: Status 404 returned error can't find the container with id 69745f2f94446470dbb0b4a710a0ad4873126c88cbc0f59e524c89e51bc474d0 Dec 05 08:07:25 crc kubenswrapper[4780]: I1205 08:07:25.913534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"0a55d386-1cf0-4951-8dec-b2e0f679041c","Type":"ContainerStarted","Data":"69745f2f94446470dbb0b4a710a0ad4873126c88cbc0f59e524c89e51bc474d0"} Dec 05 08:07:26 crc kubenswrapper[4780]: I1205 08:07:26.923128 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a55d386-1cf0-4951-8dec-b2e0f679041c" containerID="200bdc8e239a3655a683f4ce62751ea11b5005aaacb3f00cff8972212af51cfd" exitCode=0 Dec 05 08:07:26 crc kubenswrapper[4780]: I1205 08:07:26.923184 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"0a55d386-1cf0-4951-8dec-b2e0f679041c","Type":"ContainerDied","Data":"200bdc8e239a3655a683f4ce62751ea11b5005aaacb3f00cff8972212af51cfd"} Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.335602 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.371967 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_0a55d386-1cf0-4951-8dec-b2e0f679041c/mariadb-client-1-default/0.log" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.396575 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.402461 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.435525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrpb\" (UniqueName: \"kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb\") pod \"0a55d386-1cf0-4951-8dec-b2e0f679041c\" (UID: \"0a55d386-1cf0-4951-8dec-b2e0f679041c\") " Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.441463 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb" (OuterVolumeSpecName: "kube-api-access-6rrpb") pod "0a55d386-1cf0-4951-8dec-b2e0f679041c" (UID: "0a55d386-1cf0-4951-8dec-b2e0f679041c"). InnerVolumeSpecName "kube-api-access-6rrpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.537554 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrpb\" (UniqueName: \"kubernetes.io/projected/0a55d386-1cf0-4951-8dec-b2e0f679041c-kube-api-access-6rrpb\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.833865 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Dec 05 08:07:28 crc kubenswrapper[4780]: E1205 08:07:28.834512 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a55d386-1cf0-4951-8dec-b2e0f679041c" containerName="mariadb-client-1-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.834529 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a55d386-1cf0-4951-8dec-b2e0f679041c" containerName="mariadb-client-1-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.834698 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a55d386-1cf0-4951-8dec-b2e0f679041c" containerName="mariadb-client-1-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.835303 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.840774 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.943042 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx966\" (UniqueName: \"kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966\") pod \"mariadb-client-2-default\" (UID: \"49264c04-1a9e-439b-8e44-0dd46dd84061\") " pod="openstack/mariadb-client-2-default" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.945689 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69745f2f94446470dbb0b4a710a0ad4873126c88cbc0f59e524c89e51bc474d0" Dec 05 08:07:28 crc kubenswrapper[4780]: I1205 08:07:28.945771 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.045111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx966\" (UniqueName: \"kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966\") pod \"mariadb-client-2-default\" (UID: \"49264c04-1a9e-439b-8e44-0dd46dd84061\") " pod="openstack/mariadb-client-2-default" Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.063310 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx966\" (UniqueName: \"kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966\") pod \"mariadb-client-2-default\" (UID: \"49264c04-1a9e-439b-8e44-0dd46dd84061\") " pod="openstack/mariadb-client-2-default" Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.157063 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 05 08:07:29 crc kubenswrapper[4780]: W1205 08:07:29.668260 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49264c04_1a9e_439b_8e44_0dd46dd84061.slice/crio-28368f27debd391c80f12e131c9bfa404ebde8b2c1f0d9ad6cbed8ac3dccde57 WatchSource:0}: Error finding container 28368f27debd391c80f12e131c9bfa404ebde8b2c1f0d9ad6cbed8ac3dccde57: Status 404 returned error can't find the container with id 28368f27debd391c80f12e131c9bfa404ebde8b2c1f0d9ad6cbed8ac3dccde57 Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.671199 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.955794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"49264c04-1a9e-439b-8e44-0dd46dd84061","Type":"ContainerStarted","Data":"05da0023d35a05f0e3fd6826718e6ab5c788ab15739abdab188c9decacfd5642"} Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.956171 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"49264c04-1a9e-439b-8e44-0dd46dd84061","Type":"ContainerStarted","Data":"28368f27debd391c80f12e131c9bfa404ebde8b2c1f0d9ad6cbed8ac3dccde57"} Dec 05 08:07:29 crc kubenswrapper[4780]: I1205 08:07:29.972835 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.972815043 podStartE2EDuration="1.972815043s" podCreationTimestamp="2025-12-05 08:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:07:29.971231589 +0000 UTC m=+4884.040747931" watchObservedRunningTime="2025-12-05 08:07:29.972815043 +0000 UTC m=+4884.042331375" Dec 05 08:07:30 crc kubenswrapper[4780]: I1205 08:07:30.149571 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a55d386-1cf0-4951-8dec-b2e0f679041c" path="/var/lib/kubelet/pods/0a55d386-1cf0-4951-8dec-b2e0f679041c/volumes" Dec 05 08:07:30 crc kubenswrapper[4780]: I1205 08:07:30.962807 4780 generic.go:334] "Generic (PLEG): container finished" podID="49264c04-1a9e-439b-8e44-0dd46dd84061" containerID="05da0023d35a05f0e3fd6826718e6ab5c788ab15739abdab188c9decacfd5642" exitCode=1 Dec 05 08:07:30 crc kubenswrapper[4780]: I1205 08:07:30.962854 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"49264c04-1a9e-439b-8e44-0dd46dd84061","Type":"ContainerDied","Data":"05da0023d35a05f0e3fd6826718e6ab5c788ab15739abdab188c9decacfd5642"} Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.335481 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.369455 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.374619 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.452630 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx966\" (UniqueName: \"kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966\") pod \"49264c04-1a9e-439b-8e44-0dd46dd84061\" (UID: \"49264c04-1a9e-439b-8e44-0dd46dd84061\") " Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.456872 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966" (OuterVolumeSpecName: "kube-api-access-wx966") pod "49264c04-1a9e-439b-8e44-0dd46dd84061" (UID: "49264c04-1a9e-439b-8e44-0dd46dd84061"). InnerVolumeSpecName "kube-api-access-wx966". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.554081 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx966\" (UniqueName: \"kubernetes.io/projected/49264c04-1a9e-439b-8e44-0dd46dd84061-kube-api-access-wx966\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.771547 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Dec 05 08:07:32 crc kubenswrapper[4780]: E1205 08:07:32.772325 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49264c04-1a9e-439b-8e44-0dd46dd84061" containerName="mariadb-client-2-default" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.772344 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="49264c04-1a9e-439b-8e44-0dd46dd84061" containerName="mariadb-client-2-default" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.772518 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="49264c04-1a9e-439b-8e44-0dd46dd84061" containerName="mariadb-client-2-default" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.773087 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.778413 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.858619 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gff4w\" (UniqueName: \"kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w\") pod \"mariadb-client-1\" (UID: \"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7\") " pod="openstack/mariadb-client-1" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.960031 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gff4w\" (UniqueName: \"kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w\") pod \"mariadb-client-1\" (UID: \"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7\") " pod="openstack/mariadb-client-1" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.978754 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28368f27debd391c80f12e131c9bfa404ebde8b2c1f0d9ad6cbed8ac3dccde57" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.978846 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 05 08:07:32 crc kubenswrapper[4780]: I1205 08:07:32.982701 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gff4w\" (UniqueName: \"kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w\") pod \"mariadb-client-1\" (UID: \"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7\") " pod="openstack/mariadb-client-1" Dec 05 08:07:33 crc kubenswrapper[4780]: I1205 08:07:33.091505 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 05 08:07:33 crc kubenswrapper[4780]: I1205 08:07:33.582012 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 05 08:07:33 crc kubenswrapper[4780]: W1205 08:07:33.585673 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a93d08e_f7f8_4072_b7d4_8fdb8b7173b7.slice/crio-d6672872e7991b149019342f0ae1bf65f5b5838c99302dde5e15875cf1e99eb6 WatchSource:0}: Error finding container d6672872e7991b149019342f0ae1bf65f5b5838c99302dde5e15875cf1e99eb6: Status 404 returned error can't find the container with id d6672872e7991b149019342f0ae1bf65f5b5838c99302dde5e15875cf1e99eb6 Dec 05 08:07:33 crc kubenswrapper[4780]: I1205 08:07:33.991631 4780 generic.go:334] "Generic (PLEG): container finished" podID="6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" containerID="b59ea3347f8365dc73da8c28514679d00faabb6aa675cc7c7bdb325c48320d6c" exitCode=0 Dec 05 08:07:33 crc kubenswrapper[4780]: I1205 08:07:33.991715 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7","Type":"ContainerDied","Data":"b59ea3347f8365dc73da8c28514679d00faabb6aa675cc7c7bdb325c48320d6c"} Dec 05 08:07:33 crc kubenswrapper[4780]: I1205 08:07:33.992057 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7","Type":"ContainerStarted","Data":"d6672872e7991b149019342f0ae1bf65f5b5838c99302dde5e15875cf1e99eb6"} Dec 05 08:07:34 crc kubenswrapper[4780]: I1205 08:07:34.156113 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49264c04-1a9e-439b-8e44-0dd46dd84061" path="/var/lib/kubelet/pods/49264c04-1a9e-439b-8e44-0dd46dd84061/volumes" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.351214 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.408170 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7/mariadb-client-1/0.log" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.480186 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.488061 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.509458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gff4w\" (UniqueName: \"kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w\") pod \"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7\" (UID: \"6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7\") " Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.514936 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w" (OuterVolumeSpecName: "kube-api-access-gff4w") pod "6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" (UID: "6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7"). InnerVolumeSpecName "kube-api-access-gff4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.611697 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gff4w\" (UniqueName: \"kubernetes.io/projected/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7-kube-api-access-gff4w\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.910058 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Dec 05 08:07:35 crc kubenswrapper[4780]: E1205 08:07:35.910461 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" containerName="mariadb-client-1" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.910482 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" containerName="mariadb-client-1" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.910666 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" containerName="mariadb-client-1" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.911274 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.916295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xm4q\" (UniqueName: \"kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q\") pod \"mariadb-client-4-default\" (UID: \"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04\") " pod="openstack/mariadb-client-4-default" Dec 05 08:07:35 crc kubenswrapper[4780]: I1205 08:07:35.919730 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.008586 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6672872e7991b149019342f0ae1bf65f5b5838c99302dde5e15875cf1e99eb6" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.008922 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.016980 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xm4q\" (UniqueName: \"kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q\") pod \"mariadb-client-4-default\" (UID: \"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04\") " pod="openstack/mariadb-client-4-default" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.037735 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xm4q\" (UniqueName: \"kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q\") pod \"mariadb-client-4-default\" (UID: \"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04\") " pod="openstack/mariadb-client-4-default" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.150587 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7" path="/var/lib/kubelet/pods/6a93d08e-f7f8-4072-b7d4-8fdb8b7173b7/volumes" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.235407 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 05 08:07:36 crc kubenswrapper[4780]: I1205 08:07:36.512841 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 05 08:07:36 crc kubenswrapper[4780]: W1205 08:07:36.517265 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfb0ca7c_00d5_49b8_8e62_9f7bb5097f04.slice/crio-92bafd35a5abedc24618407b677d9f59e4833addec492a20e528e6eaf35efdd2 WatchSource:0}: Error finding container 92bafd35a5abedc24618407b677d9f59e4833addec492a20e528e6eaf35efdd2: Status 404 returned error can't find the container with id 92bafd35a5abedc24618407b677d9f59e4833addec492a20e528e6eaf35efdd2 Dec 05 08:07:37 crc kubenswrapper[4780]: I1205 08:07:37.019945 4780 generic.go:334] "Generic (PLEG): container finished" podID="dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" containerID="f82acadf0bbdfc7229bbdb65cc4714410e1d087f63854d2968a8fa919853bd3c" exitCode=0 Dec 05 08:07:37 crc kubenswrapper[4780]: I1205 08:07:37.020013 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04","Type":"ContainerDied","Data":"f82acadf0bbdfc7229bbdb65cc4714410e1d087f63854d2968a8fa919853bd3c"} Dec 05 08:07:37 crc kubenswrapper[4780]: I1205 08:07:37.020244 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04","Type":"ContainerStarted","Data":"92bafd35a5abedc24618407b677d9f59e4833addec492a20e528e6eaf35efdd2"} Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.233496 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.255540 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04/mariadb-client-4-default/0.log" Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.280178 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.285986 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.368493 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xm4q\" (UniqueName: \"kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q\") pod \"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04\" (UID: \"dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04\") " Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.373215 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q" (OuterVolumeSpecName: "kube-api-access-9xm4q") pod "dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" (UID: "dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04"). InnerVolumeSpecName "kube-api-access-9xm4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:39 crc kubenswrapper[4780]: I1205 08:07:39.470929 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xm4q\" (UniqueName: \"kubernetes.io/projected/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04-kube-api-access-9xm4q\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:40 crc kubenswrapper[4780]: I1205 08:07:40.043686 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bafd35a5abedc24618407b677d9f59e4833addec492a20e528e6eaf35efdd2" Dec 05 08:07:40 crc kubenswrapper[4780]: I1205 08:07:40.043755 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 05 08:07:40 crc kubenswrapper[4780]: I1205 08:07:40.149631 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" path="/var/lib/kubelet/pods/dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04/volumes" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.423267 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Dec 05 08:07:42 crc kubenswrapper[4780]: E1205 08:07:42.423983 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" containerName="mariadb-client-4-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.423996 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" containerName="mariadb-client-4-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.424142 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb0ca7c-00d5-49b8-8e62-9f7bb5097f04" containerName="mariadb-client-4-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.424602 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.428897 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j5kk9" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.434826 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.620176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dvc\" (UniqueName: \"kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc\") pod \"mariadb-client-5-default\" (UID: \"7cab7fee-2b6d-4f39-9730-75e7a6d640de\") " pod="openstack/mariadb-client-5-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.722500 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6dvc\" (UniqueName: \"kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc\") pod \"mariadb-client-5-default\" (UID: \"7cab7fee-2b6d-4f39-9730-75e7a6d640de\") " pod="openstack/mariadb-client-5-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.744940 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6dvc\" (UniqueName: \"kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc\") pod \"mariadb-client-5-default\" (UID: \"7cab7fee-2b6d-4f39-9730-75e7a6d640de\") " pod="openstack/mariadb-client-5-default" Dec 05 08:07:42 crc kubenswrapper[4780]: I1205 08:07:42.745299 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 05 08:07:43 crc kubenswrapper[4780]: I1205 08:07:43.313969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 05 08:07:44 crc kubenswrapper[4780]: I1205 08:07:44.081276 4780 generic.go:334] "Generic (PLEG): container finished" podID="7cab7fee-2b6d-4f39-9730-75e7a6d640de" containerID="cd154e6d0f9e3600405267c713cfa19dec474a962d6b8f2011ff782cc0e7af5f" exitCode=0 Dec 05 08:07:44 crc kubenswrapper[4780]: I1205 08:07:44.081330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"7cab7fee-2b6d-4f39-9730-75e7a6d640de","Type":"ContainerDied","Data":"cd154e6d0f9e3600405267c713cfa19dec474a962d6b8f2011ff782cc0e7af5f"} Dec 05 08:07:44 crc kubenswrapper[4780]: I1205 08:07:44.081368 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"7cab7fee-2b6d-4f39-9730-75e7a6d640de","Type":"ContainerStarted","Data":"a1d15f988c69de1a45bc80280fbe17ab2f552933a928cb948fc05b41ea1ba19a"} Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.496457 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.518422 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_7cab7fee-2b6d-4f39-9730-75e7a6d640de/mariadb-client-5-default/0.log" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.545203 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.550551 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.667657 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dvc\" (UniqueName: \"kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc\") pod \"7cab7fee-2b6d-4f39-9730-75e7a6d640de\" (UID: \"7cab7fee-2b6d-4f39-9730-75e7a6d640de\") " Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.677493 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc" (OuterVolumeSpecName: "kube-api-access-w6dvc") pod "7cab7fee-2b6d-4f39-9730-75e7a6d640de" (UID: "7cab7fee-2b6d-4f39-9730-75e7a6d640de"). InnerVolumeSpecName "kube-api-access-w6dvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.688792 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Dec 05 08:07:45 crc kubenswrapper[4780]: E1205 08:07:45.689249 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cab7fee-2b6d-4f39-9730-75e7a6d640de" containerName="mariadb-client-5-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.689265 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cab7fee-2b6d-4f39-9730-75e7a6d640de" containerName="mariadb-client-5-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.689481 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cab7fee-2b6d-4f39-9730-75e7a6d640de" containerName="mariadb-client-5-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.690182 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.701261 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.771076 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6dvc\" (UniqueName: \"kubernetes.io/projected/7cab7fee-2b6d-4f39-9730-75e7a6d640de-kube-api-access-w6dvc\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.872787 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z45x\" (UniqueName: \"kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x\") pod \"mariadb-client-6-default\" (UID: \"a1eb48ac-4209-4088-aaf9-a5720c46e8a6\") " pod="openstack/mariadb-client-6-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.974942 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z45x\" (UniqueName: \"kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x\") pod \"mariadb-client-6-default\" (UID: \"a1eb48ac-4209-4088-aaf9-a5720c46e8a6\") " pod="openstack/mariadb-client-6-default" Dec 05 08:07:45 crc kubenswrapper[4780]: I1205 08:07:45.996793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z45x\" (UniqueName: \"kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x\") pod \"mariadb-client-6-default\" (UID: \"a1eb48ac-4209-4088-aaf9-a5720c46e8a6\") " pod="openstack/mariadb-client-6-default" Dec 05 08:07:46 crc kubenswrapper[4780]: I1205 08:07:46.033598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 05 08:07:46 crc kubenswrapper[4780]: I1205 08:07:46.099120 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1d15f988c69de1a45bc80280fbe17ab2f552933a928cb948fc05b41ea1ba19a" Dec 05 08:07:46 crc kubenswrapper[4780]: I1205 08:07:46.099184 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 05 08:07:46 crc kubenswrapper[4780]: I1205 08:07:46.153770 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cab7fee-2b6d-4f39-9730-75e7a6d640de" path="/var/lib/kubelet/pods/7cab7fee-2b6d-4f39-9730-75e7a6d640de/volumes" Dec 05 08:07:46 crc kubenswrapper[4780]: I1205 08:07:46.570447 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 05 08:07:46 crc kubenswrapper[4780]: W1205 08:07:46.575829 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1eb48ac_4209_4088_aaf9_a5720c46e8a6.slice/crio-fbf2879b20c5321a7a596118a4c78bf2f4724528ad3c494468ad5d061e546e22 WatchSource:0}: Error finding container fbf2879b20c5321a7a596118a4c78bf2f4724528ad3c494468ad5d061e546e22: Status 404 returned error can't find the container with id fbf2879b20c5321a7a596118a4c78bf2f4724528ad3c494468ad5d061e546e22 Dec 05 08:07:47 crc kubenswrapper[4780]: I1205 08:07:47.110602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a1eb48ac-4209-4088-aaf9-a5720c46e8a6","Type":"ContainerStarted","Data":"bf466e2cef3ecbe21c207503ba4752037fc77b754036f7c354e2a5eea6b16767"} Dec 05 08:07:47 crc kubenswrapper[4780]: I1205 08:07:47.110931 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a1eb48ac-4209-4088-aaf9-a5720c46e8a6","Type":"ContainerStarted","Data":"fbf2879b20c5321a7a596118a4c78bf2f4724528ad3c494468ad5d061e546e22"} Dec 05 08:07:47 crc kubenswrapper[4780]: I1205 08:07:47.125826 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=2.125806294 podStartE2EDuration="2.125806294s" podCreationTimestamp="2025-12-05 08:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:07:47.122220777 +0000 UTC m=+4901.191737109" watchObservedRunningTime="2025-12-05 08:07:47.125806294 +0000 UTC m=+4901.195322626" Dec 05 08:07:47 crc kubenswrapper[4780]: I1205 08:07:47.179973 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_a1eb48ac-4209-4088-aaf9-a5720c46e8a6/mariadb-client-6-default/0.log" Dec 05 08:07:48 crc kubenswrapper[4780]: I1205 08:07:48.121914 4780 generic.go:334] "Generic (PLEG): container finished" podID="a1eb48ac-4209-4088-aaf9-a5720c46e8a6" containerID="bf466e2cef3ecbe21c207503ba4752037fc77b754036f7c354e2a5eea6b16767" exitCode=1 Dec 05 08:07:48 crc kubenswrapper[4780]: I1205 08:07:48.121959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a1eb48ac-4209-4088-aaf9-a5720c46e8a6","Type":"ContainerDied","Data":"bf466e2cef3ecbe21c207503ba4752037fc77b754036f7c354e2a5eea6b16767"} Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.490113 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.538352 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.543895 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.631281 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z45x\" (UniqueName: \"kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x\") pod \"a1eb48ac-4209-4088-aaf9-a5720c46e8a6\" (UID: \"a1eb48ac-4209-4088-aaf9-a5720c46e8a6\") " Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.636519 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x" (OuterVolumeSpecName: "kube-api-access-2z45x") pod "a1eb48ac-4209-4088-aaf9-a5720c46e8a6" (UID: "a1eb48ac-4209-4088-aaf9-a5720c46e8a6"). InnerVolumeSpecName "kube-api-access-2z45x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.679726 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Dec 05 08:07:49 crc kubenswrapper[4780]: E1205 08:07:49.680112 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb48ac-4209-4088-aaf9-a5720c46e8a6" containerName="mariadb-client-6-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.680131 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb48ac-4209-4088-aaf9-a5720c46e8a6" containerName="mariadb-client-6-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.680283 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb48ac-4209-4088-aaf9-a5720c46e8a6" containerName="mariadb-client-6-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.680816 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.688711 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.733487 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z45x\" (UniqueName: \"kubernetes.io/projected/a1eb48ac-4209-4088-aaf9-a5720c46e8a6-kube-api-access-2z45x\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.834922 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkxn\" (UniqueName: \"kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn\") pod \"mariadb-client-7-default\" (UID: \"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17\") " pod="openstack/mariadb-client-7-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.936958 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkxn\" (UniqueName: \"kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn\") pod \"mariadb-client-7-default\" (UID: \"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17\") " pod="openstack/mariadb-client-7-default" Dec 05 08:07:49 crc kubenswrapper[4780]: I1205 08:07:49.956737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkxn\" (UniqueName: \"kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn\") pod \"mariadb-client-7-default\" (UID: \"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17\") " pod="openstack/mariadb-client-7-default" Dec 05 08:07:50 crc kubenswrapper[4780]: I1205 08:07:50.002370 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 05 08:07:50 crc kubenswrapper[4780]: I1205 08:07:50.142899 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 05 08:07:50 crc kubenswrapper[4780]: I1205 08:07:50.149293 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1eb48ac-4209-4088-aaf9-a5720c46e8a6" path="/var/lib/kubelet/pods/a1eb48ac-4209-4088-aaf9-a5720c46e8a6/volumes" Dec 05 08:07:50 crc kubenswrapper[4780]: I1205 08:07:50.151219 4780 scope.go:117] "RemoveContainer" containerID="bf466e2cef3ecbe21c207503ba4752037fc77b754036f7c354e2a5eea6b16767" Dec 05 08:07:50 crc kubenswrapper[4780]: I1205 08:07:50.472192 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 05 08:07:51 crc kubenswrapper[4780]: I1205 08:07:51.153915 4780 generic.go:334] "Generic (PLEG): container finished" podID="4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" containerID="f2e313819c3e4c93a33ffefe562bf70fe54ed7edd68fd7599fba93ea53e02b86" exitCode=0 Dec 05 08:07:51 crc kubenswrapper[4780]: I1205 08:07:51.153956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17","Type":"ContainerDied","Data":"f2e313819c3e4c93a33ffefe562bf70fe54ed7edd68fd7599fba93ea53e02b86"} Dec 05 08:07:51 crc kubenswrapper[4780]: I1205 08:07:51.153978 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17","Type":"ContainerStarted","Data":"0c4ee437fdf10d7c4e7bdbe6003b8d4c22e5a64b1826fa96e97e01315ba3f492"} Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.459774 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.482929 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_4cfb0be5-60c6-4971-9f9b-c3a7d73eee17/mariadb-client-7-default/0.log" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.512393 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.517278 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.577332 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnkxn\" (UniqueName: \"kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn\") pod \"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17\" (UID: \"4cfb0be5-60c6-4971-9f9b-c3a7d73eee17\") " Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.582496 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn" (OuterVolumeSpecName: "kube-api-access-hnkxn") pod "4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" (UID: "4cfb0be5-60c6-4971-9f9b-c3a7d73eee17"). InnerVolumeSpecName "kube-api-access-hnkxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.630786 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Dec 05 08:07:52 crc kubenswrapper[4780]: E1205 08:07:52.631164 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" containerName="mariadb-client-7-default" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.631181 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" containerName="mariadb-client-7-default" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.631320 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" containerName="mariadb-client-7-default" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.631835 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.636522 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.679225 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjcq\" (UniqueName: \"kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq\") pod \"mariadb-client-2\" (UID: \"9b9a8bdb-4a45-4e3b-a229-a04b1681148a\") " pod="openstack/mariadb-client-2" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.679527 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnkxn\" (UniqueName: \"kubernetes.io/projected/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17-kube-api-access-hnkxn\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.781071 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppjcq\" (UniqueName: \"kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq\") pod \"mariadb-client-2\" (UID: \"9b9a8bdb-4a45-4e3b-a229-a04b1681148a\") " pod="openstack/mariadb-client-2" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.813751 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppjcq\" (UniqueName: \"kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq\") pod \"mariadb-client-2\" (UID: \"9b9a8bdb-4a45-4e3b-a229-a04b1681148a\") " pod="openstack/mariadb-client-2" Dec 05 08:07:52 crc kubenswrapper[4780]: I1205 08:07:52.954571 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 05 08:07:53 crc kubenswrapper[4780]: I1205 08:07:53.171492 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4ee437fdf10d7c4e7bdbe6003b8d4c22e5a64b1826fa96e97e01315ba3f492" Dec 05 08:07:53 crc kubenswrapper[4780]: I1205 08:07:53.171551 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 05 08:07:53 crc kubenswrapper[4780]: I1205 08:07:53.436891 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 05 08:07:53 crc kubenswrapper[4780]: W1205 08:07:53.440579 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9a8bdb_4a45_4e3b_a229_a04b1681148a.slice/crio-8649590ad50c9d737d1d1e680ae158cd3b9c7e54538e3b3e601a193f6c6c5de4 WatchSource:0}: Error finding container 8649590ad50c9d737d1d1e680ae158cd3b9c7e54538e3b3e601a193f6c6c5de4: Status 404 returned error can't find the container with id 8649590ad50c9d737d1d1e680ae158cd3b9c7e54538e3b3e601a193f6c6c5de4 Dec 05 08:07:54 crc kubenswrapper[4780]: I1205 08:07:54.162627 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfb0be5-60c6-4971-9f9b-c3a7d73eee17" path="/var/lib/kubelet/pods/4cfb0be5-60c6-4971-9f9b-c3a7d73eee17/volumes" Dec 05 08:07:54 crc kubenswrapper[4780]: I1205 08:07:54.183444 4780 generic.go:334] "Generic (PLEG): container finished" podID="9b9a8bdb-4a45-4e3b-a229-a04b1681148a" containerID="e7a444c6fc5c05752ab59bccc78301df4ba4eff5232e6b999face2f2b917bc7b" exitCode=0 Dec 05 08:07:54 crc kubenswrapper[4780]: I1205 08:07:54.183507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9b9a8bdb-4a45-4e3b-a229-a04b1681148a","Type":"ContainerDied","Data":"e7a444c6fc5c05752ab59bccc78301df4ba4eff5232e6b999face2f2b917bc7b"} Dec 05 08:07:54 crc kubenswrapper[4780]: I1205 08:07:54.183601 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"9b9a8bdb-4a45-4e3b-a229-a04b1681148a","Type":"ContainerStarted","Data":"8649590ad50c9d737d1d1e680ae158cd3b9c7e54538e3b3e601a193f6c6c5de4"} Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.585953 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.603736 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_9b9a8bdb-4a45-4e3b-a229-a04b1681148a/mariadb-client-2/0.log" Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.627935 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.635108 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.722521 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppjcq\" (UniqueName: \"kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq\") pod \"9b9a8bdb-4a45-4e3b-a229-a04b1681148a\" (UID: \"9b9a8bdb-4a45-4e3b-a229-a04b1681148a\") " Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.728069 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq" (OuterVolumeSpecName: "kube-api-access-ppjcq") pod "9b9a8bdb-4a45-4e3b-a229-a04b1681148a" (UID: "9b9a8bdb-4a45-4e3b-a229-a04b1681148a"). InnerVolumeSpecName "kube-api-access-ppjcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:07:55 crc kubenswrapper[4780]: I1205 08:07:55.824516 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppjcq\" (UniqueName: \"kubernetes.io/projected/9b9a8bdb-4a45-4e3b-a229-a04b1681148a-kube-api-access-ppjcq\") on node \"crc\" DevicePath \"\"" Dec 05 08:07:56 crc kubenswrapper[4780]: I1205 08:07:56.154293 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9a8bdb-4a45-4e3b-a229-a04b1681148a" path="/var/lib/kubelet/pods/9b9a8bdb-4a45-4e3b-a229-a04b1681148a/volumes" Dec 05 08:07:56 crc kubenswrapper[4780]: I1205 08:07:56.202909 4780 scope.go:117] "RemoveContainer" containerID="e7a444c6fc5c05752ab59bccc78301df4ba4eff5232e6b999face2f2b917bc7b" Dec 05 08:07:56 crc kubenswrapper[4780]: I1205 08:07:56.203009 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 05 08:09:24 crc kubenswrapper[4780]: I1205 08:09:24.903077 4780 scope.go:117] "RemoveContainer" containerID="76868575cea8b1fccbea16fb4257c4be118d069f51364f9ac3fac9a71b7b6241" Dec 05 08:09:29 crc kubenswrapper[4780]: I1205 08:09:29.907942 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:09:29 crc kubenswrapper[4780]: I1205 08:09:29.908400 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:09:59 crc kubenswrapper[4780]: I1205 08:09:59.907979 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:09:59 crc kubenswrapper[4780]: I1205 08:09:59.908612 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.266424 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:02 crc kubenswrapper[4780]: E1205 08:10:02.268897 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9a8bdb-4a45-4e3b-a229-a04b1681148a" containerName="mariadb-client-2" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.268920 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9a8bdb-4a45-4e3b-a229-a04b1681148a" containerName="mariadb-client-2" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.269104 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9a8bdb-4a45-4e3b-a229-a04b1681148a" containerName="mariadb-client-2" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.270396 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.284812 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.369200 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.369260 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.369490 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd25m\" (UniqueName: \"kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.470646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd25m\" (UniqueName: \"kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.470737 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.470772 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.471295 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.471337 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.497663 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pd25m\" (UniqueName: \"kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m\") pod \"redhat-operators-58fjv\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:02 crc kubenswrapper[4780]: I1205 08:10:02.587938 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:03 crc kubenswrapper[4780]: I1205 08:10:03.076556 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:03 crc kubenswrapper[4780]: I1205 08:10:03.226148 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerStarted","Data":"1b8a6c1245685e4911834f28ac25b4f9d982ecc9a31e928b2c62ccf3f9e9dd97"} Dec 05 08:10:04 crc kubenswrapper[4780]: I1205 08:10:04.234351 4780 generic.go:334] "Generic (PLEG): container finished" podID="764e039b-81cc-4f44-b9c2-c335eab64753" containerID="ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7" exitCode=0 Dec 05 08:10:04 crc kubenswrapper[4780]: I1205 08:10:04.234407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerDied","Data":"ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7"} Dec 05 08:10:04 crc kubenswrapper[4780]: I1205 08:10:04.236023 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:10:05 crc kubenswrapper[4780]: I1205 08:10:05.243394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerStarted","Data":"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7"} Dec 05 08:10:06 crc kubenswrapper[4780]: I1205 08:10:06.252665 4780 generic.go:334] "Generic (PLEG): container finished" podID="764e039b-81cc-4f44-b9c2-c335eab64753" containerID="eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7" exitCode=0 Dec 05 08:10:06 crc kubenswrapper[4780]: I1205 08:10:06.252739 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerDied","Data":"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7"} Dec 05 08:10:07 crc kubenswrapper[4780]: I1205 08:10:07.262980 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerStarted","Data":"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5"} Dec 05 08:10:07 crc kubenswrapper[4780]: I1205 08:10:07.279087 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58fjv" podStartSLOduration=2.849602786 podStartE2EDuration="5.279069186s" podCreationTimestamp="2025-12-05 08:10:02 +0000 UTC" firstStartedPulling="2025-12-05 08:10:04.235730221 +0000 UTC m=+5038.305246553" lastFinishedPulling="2025-12-05 08:10:06.665196621 +0000 UTC m=+5040.734712953" observedRunningTime="2025-12-05 08:10:07.276854626 +0000 UTC m=+5041.346370968" watchObservedRunningTime="2025-12-05 08:10:07.279069186 +0000 UTC m=+5041.348585518" Dec 05 08:10:12 crc 
kubenswrapper[4780]: I1205 08:10:12.588136 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:12 crc kubenswrapper[4780]: I1205 08:10:12.588618 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:12 crc kubenswrapper[4780]: I1205 08:10:12.629777 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:13 crc kubenswrapper[4780]: I1205 08:10:13.347432 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:13 crc kubenswrapper[4780]: I1205 08:10:13.393419 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.319990 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58fjv" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="registry-server" containerID="cri-o://f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5" gracePeriod=2 Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.730405 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.879677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities\") pod \"764e039b-81cc-4f44-b9c2-c335eab64753\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.879748 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd25m\" (UniqueName: \"kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m\") pod \"764e039b-81cc-4f44-b9c2-c335eab64753\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.879910 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content\") pod \"764e039b-81cc-4f44-b9c2-c335eab64753\" (UID: \"764e039b-81cc-4f44-b9c2-c335eab64753\") " Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.881045 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities" (OuterVolumeSpecName: "utilities") pod "764e039b-81cc-4f44-b9c2-c335eab64753" (UID: "764e039b-81cc-4f44-b9c2-c335eab64753"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.883646 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.885358 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m" (OuterVolumeSpecName: "kube-api-access-pd25m") pod "764e039b-81cc-4f44-b9c2-c335eab64753" (UID: "764e039b-81cc-4f44-b9c2-c335eab64753"). InnerVolumeSpecName "kube-api-access-pd25m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:10:15 crc kubenswrapper[4780]: I1205 08:10:15.985214 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd25m\" (UniqueName: \"kubernetes.io/projected/764e039b-81cc-4f44-b9c2-c335eab64753-kube-api-access-pd25m\") on node \"crc\" DevicePath \"\"" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.330778 4780 generic.go:334] "Generic (PLEG): container finished" podID="764e039b-81cc-4f44-b9c2-c335eab64753" containerID="f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5" exitCode=0 Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.330849 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58fjv" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.330870 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerDied","Data":"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5"} Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.331295 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58fjv" event={"ID":"764e039b-81cc-4f44-b9c2-c335eab64753","Type":"ContainerDied","Data":"1b8a6c1245685e4911834f28ac25b4f9d982ecc9a31e928b2c62ccf3f9e9dd97"} Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.331314 4780 scope.go:117] "RemoveContainer" containerID="f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.347574 4780 scope.go:117] "RemoveContainer" containerID="eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.363256 4780 scope.go:117] "RemoveContainer" containerID="ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.392559 4780 scope.go:117] "RemoveContainer" containerID="f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5" Dec 05 08:10:16 crc kubenswrapper[4780]: E1205 08:10:16.393000 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5\": container with ID starting with f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5 not found: ID does not exist" containerID="f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.393048 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5"} err="failed to get container status \"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5\": rpc error: code = NotFound desc = could not find container \"f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5\": container with ID starting with f011da68014966ca496b1f31008125eb72e2ab68c800e02178652a1c9af302b5 not found: ID does not exist" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.393069 4780 scope.go:117] "RemoveContainer" containerID="eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7" Dec 05 08:10:16 crc kubenswrapper[4780]: E1205 08:10:16.393298 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7\": container with ID starting with eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7 not found: ID does not exist" containerID="eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.393320 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7"} err="failed to get container status \"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7\": rpc error: code = NotFound desc = could not find container \"eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7\": container with ID starting with eabb0d216688e2e15ca60c78e159063e315b38662efd2eaf7a97d1e3957d4bf7 not found: ID does not exist" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.393334 4780 scope.go:117] "RemoveContainer" containerID="ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7" Dec 05 08:10:16 crc kubenswrapper[4780]: E1205 08:10:16.393720 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7\": container with ID starting with ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7 not found: ID does not exist" containerID="ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.393739 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7"} err="failed to get container status \"ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7\": rpc error: code = NotFound desc = could not find container \"ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7\": container with ID starting with ab378532d4f52831e64f5658967894ff90b9ecf981a91d968609c355a216c1b7 not found: ID does not exist" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.917601 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "764e039b-81cc-4f44-b9c2-c335eab64753" (UID: "764e039b-81cc-4f44-b9c2-c335eab64753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.970600 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:16 crc kubenswrapper[4780]: I1205 08:10:16.978147 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58fjv"] Dec 05 08:10:17 crc kubenswrapper[4780]: I1205 08:10:17.000297 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/764e039b-81cc-4f44-b9c2-c335eab64753-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:10:18 crc kubenswrapper[4780]: I1205 08:10:18.147195 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" path="/var/lib/kubelet/pods/764e039b-81cc-4f44-b9c2-c335eab64753/volumes" Dec 05 08:10:29 crc kubenswrapper[4780]: I1205 08:10:29.908484 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:10:29 crc kubenswrapper[4780]: I1205 08:10:29.909031 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:10:29 crc kubenswrapper[4780]: I1205 08:10:29.909069 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:10:29 crc kubenswrapper[4780]: I1205 08:10:29.909636 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:10:29 crc kubenswrapper[4780]: I1205 08:10:29.909693 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647" gracePeriod=600 Dec 05 08:10:31 crc kubenswrapper[4780]: I1205 08:10:31.432630 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647" exitCode=0 Dec 05 08:10:31 crc kubenswrapper[4780]: I1205 08:10:31.432712 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647"} Dec 05 08:10:31 crc kubenswrapper[4780]: I1205 08:10:31.433192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"} Dec 05 08:10:31 crc kubenswrapper[4780]: I1205 08:10:31.433216 4780 scope.go:117] "RemoveContainer" containerID="e264612b0b9d28004173bfdc611ec6c323572e25ea1ea0f52c001304885d5c5b" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.817504 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Dec 05 08:10:33 crc kubenswrapper[4780]: E1205 08:10:33.818226 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="registry-server" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.818252 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="registry-server" Dec 05 08:10:33 crc kubenswrapper[4780]: E1205 08:10:33.818274 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="extract-content" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.818283 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="extract-content" Dec 05 08:10:33 crc kubenswrapper[4780]: E1205 08:10:33.818317 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="extract-utilities" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.818327 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="extract-utilities" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.818512 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="764e039b-81cc-4f44-b9c2-c335eab64753" containerName="registry-server" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.819391 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.824313 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j5kk9" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.830839 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.946560 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44mj\" (UniqueName: \"kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data" Dec 05 08:10:33 crc kubenswrapper[4780]: I1205 08:10:33.946643 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data" Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.048564 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44mj\" (UniqueName: \"kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data" Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.048615 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data" Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.051645 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.051699 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e838f8d6395e9f02d23abb1a4f2afd4c01b5a0b2d306c3220a3345f07ec538e/globalmount\"" pod="openstack/mariadb-copy-data"
Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.073109 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44mj\" (UniqueName: \"kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data"
Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.080782 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") pod \"mariadb-copy-data\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " pod="openstack/mariadb-copy-data"
Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.138137 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Dec 05 08:10:34 crc kubenswrapper[4780]: I1205 08:10:34.663656 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Dec 05 08:10:35 crc kubenswrapper[4780]: I1205 08:10:35.470335 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3","Type":"ContainerStarted","Data":"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c"}
Dec 05 08:10:35 crc kubenswrapper[4780]: I1205 08:10:35.471754 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3","Type":"ContainerStarted","Data":"8ece86ac341f0b738ac36e7319f38bc1af0e1112099ff92205602a045262b0aa"}
Dec 05 08:10:35 crc kubenswrapper[4780]: I1205 08:10:35.485024 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.485007546 podStartE2EDuration="3.485007546s" podCreationTimestamp="2025-12-05 08:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:10:35.482836117 +0000 UTC m=+5069.552352469" watchObservedRunningTime="2025-12-05 08:10:35.485007546 +0000 UTC m=+5069.554523878"
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.233155 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.234648 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.240465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.311595 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drf5\" (UniqueName: \"kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5\") pod \"mariadb-client\" (UID: \"fff69db0-b1fb-4cfe-aa39-a249e2f68977\") " pod="openstack/mariadb-client"
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.414535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drf5\" (UniqueName: \"kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5\") pod \"mariadb-client\" (UID: \"fff69db0-b1fb-4cfe-aa39-a249e2f68977\") " pod="openstack/mariadb-client"
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.439200 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drf5\" (UniqueName: \"kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5\") pod \"mariadb-client\" (UID: \"fff69db0-b1fb-4cfe-aa39-a249e2f68977\") " pod="openstack/mariadb-client"
Dec 05 08:10:38 crc kubenswrapper[4780]: I1205 08:10:38.607502 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:39 crc kubenswrapper[4780]: I1205 08:10:39.039834 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:39 crc kubenswrapper[4780]: I1205 08:10:39.499693 4780 generic.go:334] "Generic (PLEG): container finished" podID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" containerID="975196b32aa253c6dcf1678c7d1bbb570926fe291bf4e3a828c112e173d21bc7" exitCode=0
Dec 05 08:10:39 crc kubenswrapper[4780]: I1205 08:10:39.499781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fff69db0-b1fb-4cfe-aa39-a249e2f68977","Type":"ContainerDied","Data":"975196b32aa253c6dcf1678c7d1bbb570926fe291bf4e3a828c112e173d21bc7"}
Dec 05 08:10:39 crc kubenswrapper[4780]: I1205 08:10:39.499982 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fff69db0-b1fb-4cfe-aa39-a249e2f68977","Type":"ContainerStarted","Data":"2a124c1795f2766ee5cf58085136891dbbe36abc00dc6af10736157be4ce9035"}
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.800327 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.827560 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_fff69db0-b1fb-4cfe-aa39-a249e2f68977/mariadb-client/0.log"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.855674 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.857142 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drf5\" (UniqueName: \"kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5\") pod \"fff69db0-b1fb-4cfe-aa39-a249e2f68977\" (UID: \"fff69db0-b1fb-4cfe-aa39-a249e2f68977\") "
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.861724 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.863419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5" (OuterVolumeSpecName: "kube-api-access-8drf5") pod "fff69db0-b1fb-4cfe-aa39-a249e2f68977" (UID: "fff69db0-b1fb-4cfe-aa39-a249e2f68977"). InnerVolumeSpecName "kube-api-access-8drf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.958838 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drf5\" (UniqueName: \"kubernetes.io/projected/fff69db0-b1fb-4cfe-aa39-a249e2f68977-kube-api-access-8drf5\") on node \"crc\" DevicePath \"\""
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.975629 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:40 crc kubenswrapper[4780]: E1205 08:10:40.976215 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" containerName="mariadb-client"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.976287 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" containerName="mariadb-client"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.976511 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" containerName="mariadb-client"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.977160 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:40 crc kubenswrapper[4780]: I1205 08:10:40.985678 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.061020 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm8s\" (UniqueName: \"kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s\") pod \"mariadb-client\" (UID: \"fe24d55e-0206-4b28-80e9-dcb22b9229d5\") " pod="openstack/mariadb-client"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.162687 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jm8s\" (UniqueName: \"kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s\") pod \"mariadb-client\" (UID: \"fe24d55e-0206-4b28-80e9-dcb22b9229d5\") " pod="openstack/mariadb-client"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.184887 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jm8s\" (UniqueName: \"kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s\") pod \"mariadb-client\" (UID: \"fe24d55e-0206-4b28-80e9-dcb22b9229d5\") " pod="openstack/mariadb-client"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.297270 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.515043 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a124c1795f2766ee5cf58085136891dbbe36abc00dc6af10736157be4ce9035"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.515124 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.531921 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" podUID="fe24d55e-0206-4b28-80e9-dcb22b9229d5"
Dec 05 08:10:41 crc kubenswrapper[4780]: I1205 08:10:41.713140 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:41 crc kubenswrapper[4780]: W1205 08:10:41.716476 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe24d55e_0206_4b28_80e9_dcb22b9229d5.slice/crio-a45a60e43eb5d7cc0abecfe85eed2eb9ead2efa0e64191229e6e7ca8d90ab20e WatchSource:0}: Error finding container a45a60e43eb5d7cc0abecfe85eed2eb9ead2efa0e64191229e6e7ca8d90ab20e: Status 404 returned error can't find the container with id a45a60e43eb5d7cc0abecfe85eed2eb9ead2efa0e64191229e6e7ca8d90ab20e
Dec 05 08:10:42 crc kubenswrapper[4780]: I1205 08:10:42.146914 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff69db0-b1fb-4cfe-aa39-a249e2f68977" path="/var/lib/kubelet/pods/fff69db0-b1fb-4cfe-aa39-a249e2f68977/volumes"
Dec 05 08:10:42 crc kubenswrapper[4780]: I1205 08:10:42.523752 4780 generic.go:334] "Generic (PLEG): container finished" podID="fe24d55e-0206-4b28-80e9-dcb22b9229d5" containerID="66a413f1b852c0cf348d3f8240cc8a1181f92ea2d1848f6778eca7d07445c247" exitCode=0
Dec 05 08:10:42 crc kubenswrapper[4780]: I1205 08:10:42.523791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe24d55e-0206-4b28-80e9-dcb22b9229d5","Type":"ContainerDied","Data":"66a413f1b852c0cf348d3f8240cc8a1181f92ea2d1848f6778eca7d07445c247"}
Dec 05 08:10:42 crc kubenswrapper[4780]: I1205 08:10:42.523829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe24d55e-0206-4b28-80e9-dcb22b9229d5","Type":"ContainerStarted","Data":"a45a60e43eb5d7cc0abecfe85eed2eb9ead2efa0e64191229e6e7ca8d90ab20e"}
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.799635 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.817764 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_fe24d55e-0206-4b28-80e9-dcb22b9229d5/mariadb-client/0.log"
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.855930 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.862209 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.904578 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jm8s\" (UniqueName: \"kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s\") pod \"fe24d55e-0206-4b28-80e9-dcb22b9229d5\" (UID: \"fe24d55e-0206-4b28-80e9-dcb22b9229d5\") "
Dec 05 08:10:43 crc kubenswrapper[4780]: I1205 08:10:43.915059 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s" (OuterVolumeSpecName: "kube-api-access-2jm8s") pod "fe24d55e-0206-4b28-80e9-dcb22b9229d5" (UID: "fe24d55e-0206-4b28-80e9-dcb22b9229d5"). InnerVolumeSpecName "kube-api-access-2jm8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:10:44 crc kubenswrapper[4780]: I1205 08:10:44.008370 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jm8s\" (UniqueName: \"kubernetes.io/projected/fe24d55e-0206-4b28-80e9-dcb22b9229d5-kube-api-access-2jm8s\") on node \"crc\" DevicePath \"\""
Dec 05 08:10:44 crc kubenswrapper[4780]: I1205 08:10:44.148869 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe24d55e-0206-4b28-80e9-dcb22b9229d5" path="/var/lib/kubelet/pods/fe24d55e-0206-4b28-80e9-dcb22b9229d5/volumes"
Dec 05 08:10:44 crc kubenswrapper[4780]: I1205 08:10:44.539444 4780 scope.go:117] "RemoveContainer" containerID="66a413f1b852c0cf348d3f8240cc8a1181f92ea2d1848f6778eca7d07445c247"
Dec 05 08:10:44 crc kubenswrapper[4780]: I1205 08:10:44.539479 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.066727 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 05 08:11:14 crc kubenswrapper[4780]: E1205 08:11:14.067617 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe24d55e-0206-4b28-80e9-dcb22b9229d5" containerName="mariadb-client"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.067630 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe24d55e-0206-4b28-80e9-dcb22b9229d5" containerName="mariadb-client"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.067766 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe24d55e-0206-4b28-80e9-dcb22b9229d5" containerName="mariadb-client"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.068552 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.070207 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.070826 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.072218 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.072285 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bk6vl"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.073038 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.099105 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.107428 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.124975 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.137431 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.139600 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.154458 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.159215 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223485 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223697 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223719 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223744 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f0472df-b002-413c-afc1-28c9e0101566-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.223771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224053 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224125 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-config\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224238 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjh2\" (UniqueName: \"kubernetes.io/projected/659b842c-8e50-40ac-9b42-de934ff34209-kube-api-access-gkjh2\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224272 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224379 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/659b842c-8e50-40ac-9b42-de934ff34209-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224477 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224507 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-config\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224529 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmt2\" (UniqueName: \"kubernetes.io/projected/2f0472df-b002-413c-afc1-28c9e0101566-kube-api-access-dhmt2\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.224599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8cb85eba-c265-4f41-8518-da088162943c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cb85eba-c265-4f41-8518-da088162943c\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.270301 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.271976 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.275449 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8gzlk"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.275547 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.275609 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.276309 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.287413 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.302040 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.303719 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.311003 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.312714 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.317543 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.329123 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330017 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttm7n\" (UniqueName: \"kubernetes.io/projected/b17ee026-2831-4330-a9ff-92edb8901c90-kube-api-access-ttm7n\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330061 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330095 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330177 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330216 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-config\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330273 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjh2\" (UniqueName: \"kubernetes.io/projected/659b842c-8e50-40ac-9b42-de934ff34209-kube-api-access-gkjh2\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/659b842c-8e50-40ac-9b42-de934ff34209-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330401 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330421 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-config\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330442 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmt2\" (UniqueName: \"kubernetes.io/projected/2f0472df-b002-413c-afc1-28c9e0101566-kube-api-access-dhmt2\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8cb85eba-c265-4f41-8518-da088162943c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cb85eba-c265-4f41-8518-da088162943c\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330502 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330526 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330585 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330618 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330645 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f0472df-b002-413c-afc1-28c9e0101566-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-config\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.330739 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.335572 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-config\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.336274 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/659b842c-8e50-40ac-9b42-de934ff34209-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.340306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-config\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.340453 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.340548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.341384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/659b842c-8e50-40ac-9b42-de934ff34209-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.343815 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.345054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2f0472df-b002-413c-afc1-28c9e0101566-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.345116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f0472df-b002-413c-afc1-28c9e0101566-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.345729 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.359066 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0a584bc999bc062525cc29140ce7acf85cef8c62faca1d3940185dae65078cf/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.354609 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmt2\" (UniqueName: \"kubernetes.io/projected/2f0472df-b002-413c-afc1-28c9e0101566-kube-api-access-dhmt2\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.358677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.358911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0472df-b002-413c-afc1-28c9e0101566-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0"
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.346074 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.361535 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8cb85eba-c265-4f41-8518-da088162943c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cb85eba-c265-4f41-8518-da088162943c\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3e4695f22c037dffa88f03a783c9b6f4fd83c81462e0f035314b0dbf5a865cd6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.348439 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/659b842c-8e50-40ac-9b42-de934ff34209-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.359744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjh2\" (UniqueName: \"kubernetes.io/projected/659b842c-8e50-40ac-9b42-de934ff34209-kube-api-access-gkjh2\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.394966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4922e055-65a8-4fe6-8755-1cfc35355edf\") pod \"ovsdbserver-nb-1\" (UID: \"659b842c-8e50-40ac-9b42-de934ff34209\") " pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.398273 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8cb85eba-c265-4f41-8518-da088162943c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8cb85eba-c265-4f41-8518-da088162943c\") pod \"ovsdbserver-nb-0\" (UID: \"2f0472df-b002-413c-afc1-28c9e0101566\") " pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.407710 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.428437 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432180 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-93b44790-1560-467c-b2a6-1edf044a95c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93b44790-1560-467c-b2a6-1edf044a95c0\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432300 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432325 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432342 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-config\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432364 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432390 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " 
pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432411 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432454 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432474 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-config\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs5b\" (UniqueName: \"kubernetes.io/projected/24170560-10d5-4ffe-b699-0cf14104ef10-kube-api-access-4hs5b\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432520 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrp5k\" (UniqueName: \"kubernetes.io/projected/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-kube-api-access-zrp5k\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432562 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432582 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttm7n\" (UniqueName: \"kubernetes.io/projected/b17ee026-2831-4330-a9ff-92edb8901c90-kube-api-access-ttm7n\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432602 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.432619 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433048 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433066 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433094 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433113 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433134 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-config\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433170 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433192 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433232 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gng8j\" (UniqueName: \"kubernetes.io/projected/0c6417d4-b13a-49fb-84fd-8b8b694fe781-kube-api-access-gng8j\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433257 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.433291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.434225 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.434520 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-config\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.436672 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b17ee026-2831-4330-a9ff-92edb8901c90-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.444371 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.445686 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " 
pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.445857 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17ee026-2831-4330-a9ff-92edb8901c90-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.446181 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.446319 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/416cea24b406a0bfd73fa7b8402c3c19492350d8662c650a8654df5ab5d90f6e/globalmount\"" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.457962 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttm7n\" (UniqueName: \"kubernetes.io/projected/b17ee026-2831-4330-a9ff-92edb8901c90-kube-api-access-ttm7n\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.502412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6628f4e-58a2-4b49-940f-040aa12d7c7e\") pod \"ovsdbserver-nb-2\" (UID: \"b17ee026-2831-4330-a9ff-92edb8901c90\") " pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.535339 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.535420 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536194 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-93b44790-1560-467c-b2a6-1edf044a95c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93b44790-1560-467c-b2a6-1edf044a95c0\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536233 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536252 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536305 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-config\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536427 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hs5b\" (UniqueName: \"kubernetes.io/projected/24170560-10d5-4ffe-b699-0cf14104ef10-kube-api-access-4hs5b\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536517 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrp5k\" (UniqueName: \"kubernetes.io/projected/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-kube-api-access-zrp5k\") pod \"ovsdbserver-sb-2\" (UID: 
\"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536538 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.536562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537115 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537142 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537177 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-config\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537214 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537260 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537280 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gng8j\" (UniqueName: \"kubernetes.io/projected/0c6417d4-b13a-49fb-84fd-8b8b694fe781-kube-api-access-gng8j\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.537800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-config\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.540204 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.541024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.541086 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-config\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.541162 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24170560-10d5-4ffe-b699-0cf14104ef10-config\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.542580 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.543140 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.544283 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.544391 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c6417d4-b13a-49fb-84fd-8b8b694fe781-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.545249 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.545288 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a813f1c8e1fc8b91d1ac127ff5e9129e191310e0c87ed27afc44eb4f48ae4580/globalmount\"" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.546074 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.546222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.546565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.547219 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.547275 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-93b44790-1560-467c-b2a6-1edf044a95c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93b44790-1560-467c-b2a6-1edf044a95c0\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4d801cba228b48ceb0f1a7a057d8050880e447361f09c414bb9f5390b312f7d8/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.553795 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.553845 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9543451bd04d696effa841c8adec11f13e8c1cf0e57836d5a1065fea155de952/globalmount\"" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.556093 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.560799 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gng8j\" (UniqueName: \"kubernetes.io/projected/0c6417d4-b13a-49fb-84fd-8b8b694fe781-kube-api-access-gng8j\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.561449 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.562253 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.563510 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.564139 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrp5k\" (UniqueName: \"kubernetes.io/projected/1cd288c0-8daf-49b6-8150-0e20c2cd58f0-kube-api-access-zrp5k\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.565733 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24170560-10d5-4ffe-b699-0cf14104ef10-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.567671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c6417d4-b13a-49fb-84fd-8b8b694fe781-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc 
kubenswrapper[4780]: I1205 08:11:14.568312 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hs5b\" (UniqueName: \"kubernetes.io/projected/24170560-10d5-4ffe-b699-0cf14104ef10-kube-api-access-4hs5b\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.594867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7e70831-918c-46ba-8b8c-60cce76cd409\") pod \"ovsdbserver-sb-1\" (UID: \"24170560-10d5-4ffe-b699-0cf14104ef10\") " pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.612906 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-93b44790-1560-467c-b2a6-1edf044a95c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93b44790-1560-467c-b2a6-1edf044a95c0\") pod \"ovsdbserver-sb-0\" (UID: \"0c6417d4-b13a-49fb-84fd-8b8b694fe781\") " pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.614042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d8240aa-19f0-4ad3-a932-a74d07aeb0e4\") pod \"ovsdbserver-sb-2\" (UID: \"1cd288c0-8daf-49b6-8150-0e20c2cd58f0\") " pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.630471 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.660493 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.756676 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.902407 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:14 crc kubenswrapper[4780]: I1205 08:11:14.953781 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.154010 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.250692 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 05 08:11:15 crc kubenswrapper[4780]: W1205 08:11:15.261956 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24170560_10d5_4ffe_b699_0cf14104ef10.slice/crio-b7296f0e03984495ca32d7cbb66c85873d92a9d842b82bcca95fc46433bfaf47 WatchSource:0}: Error finding container b7296f0e03984495ca32d7cbb66c85873d92a9d842b82bcca95fc46433bfaf47: Status 404 returned error can't find the container with id b7296f0e03984495ca32d7cbb66c85873d92a9d842b82bcca95fc46433bfaf47 Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.395499 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 05 08:11:15 crc kubenswrapper[4780]: W1205 08:11:15.414079 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb17ee026_2831_4330_a9ff_92edb8901c90.slice/crio-a67aa99ffa6662a9faf8f022c715f684b1e86cc6b744fa4a6f94b5695439bc45 WatchSource:0}: Error finding container a67aa99ffa6662a9faf8f022c715f684b1e86cc6b744fa4a6f94b5695439bc45: Status 404 returned error can't find the container with id a67aa99ffa6662a9faf8f022c715f684b1e86cc6b744fa4a6f94b5695439bc45 Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.497190 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.813914 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6417d4-b13a-49fb-84fd-8b8b694fe781","Type":"ContainerStarted","Data":"b1314fed4522cb439062f23fe9f1ce8bd77ecc9e922a0efc7403085b2c9629e1"} Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.815594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"659b842c-8e50-40ac-9b42-de934ff34209","Type":"ContainerStarted","Data":"f5cada464e250a56b89e1136f8a15ce70e24679f2ae61bb582a0cf606f6f15f5"} Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.816781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b17ee026-2831-4330-a9ff-92edb8901c90","Type":"ContainerStarted","Data":"a67aa99ffa6662a9faf8f022c715f684b1e86cc6b744fa4a6f94b5695439bc45"} Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.819645 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2f0472df-b002-413c-afc1-28c9e0101566","Type":"ContainerStarted","Data":"dadf50d281278955fbc22732b1fce296000ec6696d5c4ffdaa796aa6b6e24f06"} Dec 05 08:11:15 crc kubenswrapper[4780]: I1205 08:11:15.837064 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"24170560-10d5-4ffe-b699-0cf14104ef10","Type":"ContainerStarted","Data":"b7296f0e03984495ca32d7cbb66c85873d92a9d842b82bcca95fc46433bfaf47"} Dec 05 08:11:16 crc kubenswrapper[4780]: I1205 08:11:16.275686 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-2"] Dec 05 08:11:16 crc kubenswrapper[4780]: W1205 08:11:16.287949 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cd288c0_8daf_49b6_8150_0e20c2cd58f0.slice/crio-46312189123dd2f1e7b7ee347e531dd1e6709b6578dfd87830e8a9fcab75afed WatchSource:0}: Error finding container 46312189123dd2f1e7b7ee347e531dd1e6709b6578dfd87830e8a9fcab75afed: Status 404 returned error can't find the container with id 46312189123dd2f1e7b7ee347e531dd1e6709b6578dfd87830e8a9fcab75afed Dec 05 08:11:16 crc kubenswrapper[4780]: I1205 08:11:16.849194 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1cd288c0-8daf-49b6-8150-0e20c2cd58f0","Type":"ContainerStarted","Data":"46312189123dd2f1e7b7ee347e531dd1e6709b6578dfd87830e8a9fcab75afed"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.879196 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6417d4-b13a-49fb-84fd-8b8b694fe781","Type":"ContainerStarted","Data":"1c403745e47dbef55d51c093b42cda6e6622807e5d0faa8519a47cd8299fd658"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.879749 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0c6417d4-b13a-49fb-84fd-8b8b694fe781","Type":"ContainerStarted","Data":"3a349400fc91dcb619c6e2269ab5b4b7d0810f2eb84eb5ecbf3b1a268ae06a47"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.881611 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"659b842c-8e50-40ac-9b42-de934ff34209","Type":"ContainerStarted","Data":"77343cbc17c2c6f09c01c2722f107afe30c22b939cb5b05ade915645ce13338b"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.881670 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"659b842c-8e50-40ac-9b42-de934ff34209","Type":"ContainerStarted","Data":"80a3e3cebaa231dafd2ee2bb21608b540c1484b1294f95535ebad4d371727c80"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.883457 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b17ee026-2831-4330-a9ff-92edb8901c90","Type":"ContainerStarted","Data":"58b2cf5a9f6ca25e1a86a963b8389abb168edcf83e4c2a05d08630f9b76ef1fe"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.883491 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"b17ee026-2831-4330-a9ff-92edb8901c90","Type":"ContainerStarted","Data":"e2bc7e02b53a45b991073ee9fd775b6804f337d87b341628dec8c883557d70ee"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.886487 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1cd288c0-8daf-49b6-8150-0e20c2cd58f0","Type":"ContainerStarted","Data":"0eb40b7e92d3254c6dce5224d17376a7db5954fdad715c4e98f6afeb76ba065b"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.886527 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"1cd288c0-8daf-49b6-8150-0e20c2cd58f0","Type":"ContainerStarted","Data":"9e80be973c0c746d02df5aba45889ac69ab503fbc7857db60f8c4807d8c07995"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.888850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"2f0472df-b002-413c-afc1-28c9e0101566","Type":"ContainerStarted","Data":"5fe4e518b5793d42570b63263405cdb71a2f4af71f963761e69060fa76b3df41"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.888908 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2f0472df-b002-413c-afc1-28c9e0101566","Type":"ContainerStarted","Data":"e7e601209e093d6b46bef4ec81835aa40631d0754cd6237bc52298ae755d48a3"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.891128 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"24170560-10d5-4ffe-b699-0cf14104ef10","Type":"ContainerStarted","Data":"706b7a95ddd24e243e74263cee03561941c614ad0ab76ddcd328614d7d657c38"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.891154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"24170560-10d5-4ffe-b699-0cf14104ef10","Type":"ContainerStarted","Data":"071561864d7fa65c023979186349e93a722ea99ad56d47d55e35ed905a806f07"} Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.901513 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.747328918 podStartE2EDuration="7.901495902s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:15.512380613 +0000 UTC m=+5109.581896935" lastFinishedPulling="2025-12-05 08:11:19.666547587 +0000 UTC m=+5113.736063919" observedRunningTime="2025-12-05 08:11:20.899500717 +0000 UTC m=+5114.969017049" watchObservedRunningTime="2025-12-05 08:11:20.901495902 +0000 UTC m=+5114.971012234" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.902811 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.926686 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.247774252 podStartE2EDuration="7.926666806s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:14.974217687 +0000 UTC m=+5109.043734019" lastFinishedPulling="2025-12-05 08:11:19.653110221 +0000 UTC m=+5113.722626573" observedRunningTime="2025-12-05 08:11:20.924041715 +0000 UTC m=+5114.993558067" watchObservedRunningTime="2025-12-05 08:11:20.926666806 +0000 UTC m=+5114.996183148" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.949192 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.706464836 podStartE2EDuration="7.949170008s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:15.416240908 +0000 UTC m=+5109.485757240" lastFinishedPulling="2025-12-05 08:11:19.65894604 +0000 UTC m=+5113.728462412" observedRunningTime="2025-12-05 08:11:20.942485866 +0000 UTC m=+5115.012002228" watchObservedRunningTime="2025-12-05 08:11:20.949170008 +0000 UTC m=+5115.018686340" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.964611 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.453191259 podStartE2EDuration="7.964593038s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:15.160608266 +0000 UTC m=+5109.230124598" lastFinishedPulling="2025-12-05 08:11:19.672010045 +0000 UTC m=+5113.741526377" observedRunningTime="2025-12-05 
08:11:20.960954588 +0000 UTC m=+5115.030470930" watchObservedRunningTime="2025-12-05 08:11:20.964593038 +0000 UTC m=+5115.034109390" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.980753 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.592307557 podStartE2EDuration="7.980737606s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:16.290924666 +0000 UTC m=+5110.360440988" lastFinishedPulling="2025-12-05 08:11:19.679354705 +0000 UTC m=+5113.748871037" observedRunningTime="2025-12-05 08:11:20.977961411 +0000 UTC m=+5115.047477753" watchObservedRunningTime="2025-12-05 08:11:20.980737606 +0000 UTC m=+5115.050253938" Dec 05 08:11:20 crc kubenswrapper[4780]: I1205 08:11:20.999173 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.606470747 podStartE2EDuration="7.999154647s" podCreationTimestamp="2025-12-05 08:11:13 +0000 UTC" firstStartedPulling="2025-12-05 08:11:15.266184558 +0000 UTC m=+5109.335700880" lastFinishedPulling="2025-12-05 08:11:19.658868448 +0000 UTC m=+5113.728384780" observedRunningTime="2025-12-05 08:11:20.993810732 +0000 UTC m=+5115.063327084" watchObservedRunningTime="2025-12-05 08:11:20.999154647 +0000 UTC m=+5115.068670969" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.408869 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.428970 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.444641 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.468099 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.632249 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.660969 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.665463 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.701476 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.757105 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.791687 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.911005 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.911051 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.911396 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.911561 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.911665 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.940969 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:23 crc kubenswrapper[4780]: I1205 08:11:23.941452 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.465518 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.677189 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.741958 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.743797 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.746346 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.747830 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.816442 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.816488 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.816537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgzv\" (UniqueName: \"kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.816580 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.918571 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.918675 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.918697 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.918750 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgzv\" (UniqueName: \"kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.921352 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.921814 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.921944 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:24 crc kubenswrapper[4780]: I1205 08:11:24.947475 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgzv\" (UniqueName: \"kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv\") pod \"dnsmasq-dns-758bb8fb87-l68kb\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.020137 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.020913 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.069284 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.070571 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.073168 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.085623 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.123948 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.123998 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.124038 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.124064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.124148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2cm\" (UniqueName: \"kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.226117 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.226436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.226505 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 
crc kubenswrapper[4780]: I1205 08:11:25.226545 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.226575 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2cm\" (UniqueName: \"kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.229187 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.229245 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.229672 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.230070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.249440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2cm\" (UniqueName: \"kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm\") pod \"dnsmasq-dns-bc8d6fdd7-9qnr6\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.417450 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.525800 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.847871 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:11:25 crc kubenswrapper[4780]: W1205 08:11:25.865124 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf447a04_4ae4_43f8_84ed_6fc92c2a6445.slice/crio-3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df WatchSource:0}: Error finding container 3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df: Status 404 returned error can't find the container with id 3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.939147 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" event={"ID":"cf447a04-4ae4-43f8-84ed-6fc92c2a6445","Type":"ContainerStarted","Data":"3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df"} Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.942144 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" event={"ID":"5a8cfd4b-a25a-4e85-867c-ffd501c0790d","Type":"ContainerStarted","Data":"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08"} Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.942181 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" event={"ID":"5a8cfd4b-a25a-4e85-867c-ffd501c0790d","Type":"ContainerStarted","Data":"9387204599a57ab05eea2484c134549449e4a2d4144afe9f8465f6a073420f90"} Dec 05 08:11:25 crc kubenswrapper[4780]: I1205 08:11:25.942181 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" podUID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" containerName="init" containerID="cri-o://ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08" gracePeriod=10 Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.454293 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.462607 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config\") pod \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.462667 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc\") pod \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.462692 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb\") pod \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.462739 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgzv\" (UniqueName: \"kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv\") pod \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\" (UID: \"5a8cfd4b-a25a-4e85-867c-ffd501c0790d\") " Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.467110 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv" (OuterVolumeSpecName: "kube-api-access-xqgzv") pod "5a8cfd4b-a25a-4e85-867c-ffd501c0790d" (UID: "5a8cfd4b-a25a-4e85-867c-ffd501c0790d"). InnerVolumeSpecName "kube-api-access-xqgzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.484353 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a8cfd4b-a25a-4e85-867c-ffd501c0790d" (UID: "5a8cfd4b-a25a-4e85-867c-ffd501c0790d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.486485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a8cfd4b-a25a-4e85-867c-ffd501c0790d" (UID: "5a8cfd4b-a25a-4e85-867c-ffd501c0790d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.487521 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config" (OuterVolumeSpecName: "config") pod "5a8cfd4b-a25a-4e85-867c-ffd501c0790d" (UID: "5a8cfd4b-a25a-4e85-867c-ffd501c0790d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.564477 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.564520 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.564536 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgzv\" (UniqueName: \"kubernetes.io/projected/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-kube-api-access-xqgzv\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.564546 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a8cfd4b-a25a-4e85-867c-ffd501c0790d-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.951196 4780 generic.go:334] "Generic (PLEG): container finished" podID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerID="7896ab9304511dcde7af8f31b0ba82270faa37f22f2b2b14ac57832098562edd" exitCode=0 Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.951243 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" event={"ID":"cf447a04-4ae4-43f8-84ed-6fc92c2a6445","Type":"ContainerDied","Data":"7896ab9304511dcde7af8f31b0ba82270faa37f22f2b2b14ac57832098562edd"} Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.954521 4780 generic.go:334] "Generic (PLEG): container finished" podID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" containerID="ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08" exitCode=0 Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.954588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" event={"ID":"5a8cfd4b-a25a-4e85-867c-ffd501c0790d","Type":"ContainerDied","Data":"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08"} Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.954621 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" event={"ID":"5a8cfd4b-a25a-4e85-867c-ffd501c0790d","Type":"ContainerDied","Data":"9387204599a57ab05eea2484c134549449e4a2d4144afe9f8465f6a073420f90"} Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.954667 4780 scope.go:117] "RemoveContainer" containerID="ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08" Dec 05 08:11:26 crc kubenswrapper[4780]: I1205 08:11:26.954858 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-758bb8fb87-l68kb" Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.090067 4780 scope.go:117] "RemoveContainer" containerID="ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08" Dec 05 08:11:27 crc kubenswrapper[4780]: E1205 08:11:27.091275 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08\": container with ID starting with ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08 not found: ID does not exist" containerID="ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08" Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.091343 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08"} err="failed to get container status \"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08\": rpc error: code = NotFound desc = could not find container \"ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08\": container with ID starting with ab5db2a85486a087e048dc10d11ec0a1d8d7f8df0ab41141bee1ca90662bef08 not found: ID does not exist" Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.129352 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.135185 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758bb8fb87-l68kb"] Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.964269 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" event={"ID":"cf447a04-4ae4-43f8-84ed-6fc92c2a6445","Type":"ContainerStarted","Data":"822fec1d5af7ad72030d3fdd8273f0c3273d45643e6bf70ad162b906ebe51fc6"} Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.964409 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:27 crc kubenswrapper[4780]: I1205 08:11:27.987329 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" podStartSLOduration=2.987308462 podStartE2EDuration="2.987308462s" podCreationTimestamp="2025-12-05 08:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:11:27.980130577 +0000 UTC m=+5122.049646919" watchObservedRunningTime="2025-12-05 08:11:27.987308462 +0000 UTC m=+5122.056824794" Dec 05 08:11:28 crc kubenswrapper[4780]: I1205 08:11:28.148613 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" path="/var/lib/kubelet/pods/5a8cfd4b-a25a-4e85-867c-ffd501c0790d/volumes" Dec 05 08:11:29 crc kubenswrapper[4780]: I1205 08:11:29.468333 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 05 08:11:29 crc kubenswrapper[4780]: I1205 08:11:29.700989 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Dec 05 08:11:29 crc kubenswrapper[4780]: I1205 08:11:29.802492 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Dec 05 08:11:29 crc kubenswrapper[4780]: I1205 08:11:29.954845 4780 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.531199 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Dec 05 08:11:32 crc kubenswrapper[4780]: E1205 08:11:32.531890 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" containerName="init" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.531904 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" containerName="init" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.532088 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a8cfd4b-a25a-4e85-867c-ffd501c0790d" containerName="init" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.532668 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.539685 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.542448 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.668375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.668485 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hpt\" (UniqueName: \"kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.668543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.770030 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.770163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hpt\" (UniqueName: \"kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.770254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " 
pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.772624 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.772665 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2d0ce1197f51bea28f13943ab56e4aaa469ef728fef9fee132d81675074f1fc/globalmount\"" pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.777995 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.790908 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hpt\" (UniqueName: \"kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.822464 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") pod \"ovn-copy-data\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " pod="openstack/ovn-copy-data" Dec 05 08:11:32 crc kubenswrapper[4780]: I1205 08:11:32.853339 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Dec 05 08:11:33 crc kubenswrapper[4780]: I1205 08:11:33.361621 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Dec 05 08:11:34 crc kubenswrapper[4780]: I1205 08:11:34.014662 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"b26313a3-240f-4139-87cd-8002f9f36c02","Type":"ContainerStarted","Data":"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac"} Dec 05 08:11:34 crc kubenswrapper[4780]: I1205 08:11:34.015004 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"b26313a3-240f-4139-87cd-8002f9f36c02","Type":"ContainerStarted","Data":"b1cc14b1f513727d5e576b53539e3d163178d86b08cf0f428838be01868eb690"} Dec 05 08:11:34 crc kubenswrapper[4780]: I1205 08:11:34.035113 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.663349943 podStartE2EDuration="3.035091213s" podCreationTimestamp="2025-12-05 08:11:31 +0000 UTC" firstStartedPulling="2025-12-05 08:11:33.370644813 +0000 UTC m=+5127.440161155" lastFinishedPulling="2025-12-05 08:11:33.742386093 +0000 UTC m=+5127.811902425" observedRunningTime="2025-12-05 08:11:34.031483065 +0000 UTC m=+5128.100999447" watchObservedRunningTime="2025-12-05 08:11:34.035091213 +0000 UTC m=+5128.104607545" Dec 05 08:11:35 crc kubenswrapper[4780]: I1205 08:11:35.419201 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:11:35 crc kubenswrapper[4780]: I1205 08:11:35.487691 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:11:35 crc kubenswrapper[4780]: I1205 08:11:35.487940 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="dnsmasq-dns" containerID="cri-o://fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b" gracePeriod=10 Dec 05 08:11:35 crc kubenswrapper[4780]: I1205 08:11:35.945839 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.026957 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config\") pod \"ceee5562-3cff-4c1c-a163-d40e75cacedb\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.027004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc\") pod \"ceee5562-3cff-4c1c-a163-d40e75cacedb\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.027117 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzrfg\" (UniqueName: \"kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg\") pod \"ceee5562-3cff-4c1c-a163-d40e75cacedb\" (UID: \"ceee5562-3cff-4c1c-a163-d40e75cacedb\") " Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.034152 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg" (OuterVolumeSpecName: "kube-api-access-bzrfg") pod "ceee5562-3cff-4c1c-a163-d40e75cacedb" (UID: "ceee5562-3cff-4c1c-a163-d40e75cacedb"). InnerVolumeSpecName "kube-api-access-bzrfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.036075 4780 generic.go:334] "Generic (PLEG): container finished" podID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerID="fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b" exitCode=0 Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.036125 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" event={"ID":"ceee5562-3cff-4c1c-a163-d40e75cacedb","Type":"ContainerDied","Data":"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b"} Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.036157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" event={"ID":"ceee5562-3cff-4c1c-a163-d40e75cacedb","Type":"ContainerDied","Data":"6c88af2d1e5d6b29c80179376c7e88c85d9460ac95e3123bcf0f9997283286b6"} Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.036180 4780 scope.go:117] "RemoveContainer" containerID="fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.036395 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-778d75ccf7-8zt8b" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.082704 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config" (OuterVolumeSpecName: "config") pod "ceee5562-3cff-4c1c-a163-d40e75cacedb" (UID: "ceee5562-3cff-4c1c-a163-d40e75cacedb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.083364 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceee5562-3cff-4c1c-a163-d40e75cacedb" (UID: "ceee5562-3cff-4c1c-a163-d40e75cacedb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.110131 4780 scope.go:117] "RemoveContainer" containerID="03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.125712 4780 scope.go:117] "RemoveContainer" containerID="fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b" Dec 05 08:11:36 crc kubenswrapper[4780]: E1205 08:11:36.126097 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b\": container with ID starting with fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b not found: ID does not exist" containerID="fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.126184 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b"} err="failed to get container status \"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b\": rpc error: code = NotFound desc = could not find container \"fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b\": container with ID starting with fbdc887249587dab81ab8ab6c82f5cc497c1b4be9ebd0363a7f9e17007fe2d8b not found: ID does not exist" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.126257 4780 scope.go:117] "RemoveContainer" containerID="03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d" Dec 05 08:11:36 crc kubenswrapper[4780]: E1205 08:11:36.126686 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d\": container with ID starting with 03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d not found: ID does not exist" containerID="03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.126710 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d"} err="failed to get container status \"03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d\": rpc error: code = NotFound desc = could not find container \"03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d\": container with ID starting with 03df81428632c062d6c918c0071ec3400d48a4de462207214d39eb663dc7f81d not found: ID does not exist" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.128719 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzrfg\" (UniqueName: \"kubernetes.io/projected/ceee5562-3cff-4c1c-a163-d40e75cacedb-kube-api-access-bzrfg\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.128798 4780 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.128863 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceee5562-3cff-4c1c-a163-d40e75cacedb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.360357 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:11:36 crc kubenswrapper[4780]: I1205 08:11:36.367240 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-778d75ccf7-8zt8b"] Dec 05 08:11:38 crc kubenswrapper[4780]: I1205 08:11:38.147626 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" path="/var/lib/kubelet/pods/ceee5562-3cff-4c1c-a163-d40e75cacedb/volumes" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.111379 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 05 08:11:39 crc kubenswrapper[4780]: E1205 08:11:39.111811 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="init" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.111836 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="init" Dec 05 08:11:39 crc kubenswrapper[4780]: E1205 08:11:39.111862 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="dnsmasq-dns" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.111872 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="dnsmasq-dns" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.112125 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceee5562-3cff-4c1c-a163-d40e75cacedb" containerName="dnsmasq-dns" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.113235 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.120095 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.120265 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wtbtp" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.120977 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.121043 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.135159 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.277859 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-config\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278008 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278044 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278349 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278455 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngq9q\" (UniqueName: \"kubernetes.io/projected/1b6ee248-3deb-4c44-962b-6c6e174c9b68-kube-api-access-ngq9q\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-scripts\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.278687 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: 
I1205 08:11:39.380051 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-scripts\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380394 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380442 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-config\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380477 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380514 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380551 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.380572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngq9q\" (UniqueName: \"kubernetes.io/projected/1b6ee248-3deb-4c44-962b-6c6e174c9b68-kube-api-access-ngq9q\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.381149 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-scripts\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.381258 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.381830 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b6ee248-3deb-4c44-962b-6c6e174c9b68-config\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.387334 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.388486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.391119 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6ee248-3deb-4c44-962b-6c6e174c9b68-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.401526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngq9q\" (UniqueName: \"kubernetes.io/projected/1b6ee248-3deb-4c44-962b-6c6e174c9b68-kube-api-access-ngq9q\") pod \"ovn-northd-0\" (UID: \"1b6ee248-3deb-4c44-962b-6c6e174c9b68\") " pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.444210 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 08:11:39 crc kubenswrapper[4780]: I1205 08:11:39.918610 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 08:11:40 crc kubenswrapper[4780]: I1205 08:11:40.089756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b6ee248-3deb-4c44-962b-6c6e174c9b68","Type":"ContainerStarted","Data":"6ab4ea340a97a533ad58620dac2e6dcfe115a5e5211bf31da5c3ebbcd928a7e9"} Dec 05 08:11:41 crc kubenswrapper[4780]: I1205 08:11:41.103484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b6ee248-3deb-4c44-962b-6c6e174c9b68","Type":"ContainerStarted","Data":"562d03bac2ce49b93157b5b3db9a7ab7472001e8fb899d4601ef82289afd46ee"} Dec 05 08:11:42 crc kubenswrapper[4780]: I1205 08:11:42.114706 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b6ee248-3deb-4c44-962b-6c6e174c9b68","Type":"ContainerStarted","Data":"6e78951cfec7468033916f674fceea8f6d467a64f4a067d4fed31b70668f0e59"} Dec 05 08:11:42 crc kubenswrapper[4780]: I1205 08:11:42.114920 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 05 08:11:42 crc kubenswrapper[4780]: I1205 08:11:42.153810 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.395652145 podStartE2EDuration="3.153778962s" podCreationTimestamp="2025-12-05 08:11:39 +0000 UTC" firstStartedPulling="2025-12-05 08:11:39.932630418 +0000 UTC m=+5134.002146750" lastFinishedPulling="2025-12-05 08:11:40.690757235 +0000 UTC m=+5134.760273567" observedRunningTime="2025-12-05 08:11:42.133481471 +0000 UTC m=+5136.202997803" watchObservedRunningTime="2025-12-05 08:11:42.153778962 +0000 UTC m=+5136.223295324" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.084842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6429m"] Dec 05 08:11:44 crc kubenswrapper[4780]: 
I1205 08:11:44.086694 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.098086 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6429m"] Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.182682 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t7c\" (UniqueName: \"kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.182869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.185434 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-37db-account-create-update-2qq4m"] Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.187107 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.191836 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.197901 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-37db-account-create-update-2qq4m"] Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.284487 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.284565 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvsj\" (UniqueName: \"kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.284599 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.284792 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t7c\" (UniqueName: \"kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 
08:11:44.285448 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.307461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t7c\" (UniqueName: \"kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c\") pod \"keystone-db-create-6429m\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.386973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvsj\" (UniqueName: \"kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.387117 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.388028 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.405316 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvsj\" (UniqueName: \"kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj\") pod \"keystone-37db-account-create-update-2qq4m\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.409523 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6429m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.515115 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:44 crc kubenswrapper[4780]: I1205 08:11:44.897308 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6429m"] Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.001507 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-37db-account-create-update-2qq4m"] Dec 05 08:11:45 crc kubenswrapper[4780]: W1205 08:11:45.005071 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9fa951_fe33_4e7e_8f01_8bd63e78cf8e.slice/crio-e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3 WatchSource:0}: Error finding container e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3: Status 404 returned error can't find the container with id e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3 Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.143477 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37db-account-create-update-2qq4m" event={"ID":"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e","Type":"ContainerStarted","Data":"f77ee4ba9be3082c9bba20e309716e16d5dde91c3afc1f1f83d9897278314516"} Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.143541 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37db-account-create-update-2qq4m" event={"ID":"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e","Type":"ContainerStarted","Data":"e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3"} Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.146257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6429m" event={"ID":"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa","Type":"ContainerStarted","Data":"177a6d11dbf916bc5a213be71e0fa8f74f807e6e93df99e5b13abf35b8af77db"} Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.146289 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6429m" event={"ID":"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa","Type":"ContainerStarted","Data":"c032472258d626574b01e474fb23a94b8dd602e2ea6e4f42c2b471e55972c7db"} Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.163720 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-37db-account-create-update-2qq4m" podStartSLOduration=1.163702768 podStartE2EDuration="1.163702768s" podCreationTimestamp="2025-12-05 08:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:11:45.157678585 +0000 UTC m=+5139.227194927" watchObservedRunningTime="2025-12-05 08:11:45.163702768 +0000 UTC m=+5139.233219100" Dec 05 08:11:45 crc kubenswrapper[4780]: I1205 08:11:45.181222 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6429m" podStartSLOduration=1.181205644 podStartE2EDuration="1.181205644s" podCreationTimestamp="2025-12-05 08:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:11:45.177096822 +0000 UTC m=+5139.246613164" watchObservedRunningTime="2025-12-05 08:11:45.181205644 +0000 UTC m=+5139.250721976" Dec 05 08:11:46 crc kubenswrapper[4780]: I1205 08:11:46.156560 4780 generic.go:334] "Generic (PLEG): container finished" podID="c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" 
containerID="177a6d11dbf916bc5a213be71e0fa8f74f807e6e93df99e5b13abf35b8af77db" exitCode=0 Dec 05 08:11:46 crc kubenswrapper[4780]: I1205 08:11:46.159140 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6429m" event={"ID":"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa","Type":"ContainerDied","Data":"177a6d11dbf916bc5a213be71e0fa8f74f807e6e93df99e5b13abf35b8af77db"} Dec 05 08:11:46 crc kubenswrapper[4780]: I1205 08:11:46.162215 4780 generic.go:334] "Generic (PLEG): container finished" podID="7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" containerID="f77ee4ba9be3082c9bba20e309716e16d5dde91c3afc1f1f83d9897278314516" exitCode=0 Dec 05 08:11:46 crc kubenswrapper[4780]: I1205 08:11:46.162305 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37db-account-create-update-2qq4m" event={"ID":"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e","Type":"ContainerDied","Data":"f77ee4ba9be3082c9bba20e309716e16d5dde91c3afc1f1f83d9897278314516"} Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.722781 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6429m" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.728938 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.871384 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts\") pod \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.871536 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2t7c\" (UniqueName: \"kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c\") pod \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.871557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts\") pod \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\" (UID: \"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa\") " Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.871623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvsj\" (UniqueName: \"kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj\") pod \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\" (UID: \"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e\") " Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.872605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" (UID: "7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.873381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" (UID: "c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.878811 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj" (OuterVolumeSpecName: "kube-api-access-pzvsj") pod "7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" (UID: "7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e"). InnerVolumeSpecName "kube-api-access-pzvsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.880232 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c" (OuterVolumeSpecName: "kube-api-access-g2t7c") pod "c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" (UID: "c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa"). InnerVolumeSpecName "kube-api-access-g2t7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.973393 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.973437 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2t7c\" (UniqueName: \"kubernetes.io/projected/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-kube-api-access-g2t7c\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.973451 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:47 crc kubenswrapper[4780]: I1205 08:11:47.973464 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvsj\" (UniqueName: \"kubernetes.io/projected/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e-kube-api-access-pzvsj\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.196269 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6429m" Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.196260 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6429m" event={"ID":"c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa","Type":"ContainerDied","Data":"c032472258d626574b01e474fb23a94b8dd602e2ea6e4f42c2b471e55972c7db"} Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.196397 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c032472258d626574b01e474fb23a94b8dd602e2ea6e4f42c2b471e55972c7db" Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.198995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-37db-account-create-update-2qq4m" event={"ID":"7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e","Type":"ContainerDied","Data":"e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3"} Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.199021 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86af397b08fb0ade3a74eba52972db67bad1d0a9ef98777b5f3d8aff61982c3" Dec 05 08:11:48 crc kubenswrapper[4780]: I1205 08:11:48.199085 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-37db-account-create-update-2qq4m" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.700467 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-np2gw"] Dec 05 08:11:49 crc kubenswrapper[4780]: E1205 08:11:49.701373 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" containerName="mariadb-database-create" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.701394 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" containerName="mariadb-database-create" Dec 05 08:11:49 crc kubenswrapper[4780]: E1205 08:11:49.701416 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" containerName="mariadb-account-create-update" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.701423 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" containerName="mariadb-account-create-update" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.701626 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" containerName="mariadb-database-create" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.701644 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" containerName="mariadb-account-create-update" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.702352 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.709777 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vqdl6" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.710247 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.710492 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.711289 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.725806 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-np2gw"] Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.805547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.805652 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.805674 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v469m\" (UniqueName: \"kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.907448 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.907494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v469m\" (UniqueName: \"kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.907605 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.912555 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " 
pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.913101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:49 crc kubenswrapper[4780]: I1205 08:11:49.931623 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v469m\" (UniqueName: \"kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m\") pod \"keystone-db-sync-np2gw\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:50 crc kubenswrapper[4780]: I1205 08:11:50.052695 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:50 crc kubenswrapper[4780]: I1205 08:11:50.470655 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-np2gw"] Dec 05 08:11:51 crc kubenswrapper[4780]: I1205 08:11:51.219970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np2gw" event={"ID":"4b355fa2-1090-4e02-b17d-323cd82a2b06","Type":"ContainerStarted","Data":"93b22845aa4e1f665c0e58c4fbd430f329a18e51fc6db5156c01de2547ac3d3f"} Dec 05 08:11:54 crc kubenswrapper[4780]: I1205 08:11:54.510157 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 08:11:55 crc kubenswrapper[4780]: I1205 08:11:55.260067 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np2gw" event={"ID":"4b355fa2-1090-4e02-b17d-323cd82a2b06","Type":"ContainerStarted","Data":"4de9adec3f660ea4ee648b9800ae676f6fbe54b9d570f7d145962723f3a20668"} Dec 05 08:11:55 crc kubenswrapper[4780]: I1205 08:11:55.284113 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-np2gw" podStartSLOduration=1.740592993 podStartE2EDuration="6.284091385s" podCreationTimestamp="2025-12-05 08:11:49 +0000 UTC" firstStartedPulling="2025-12-05 08:11:50.47461213 +0000 UTC m=+5144.544128462" lastFinishedPulling="2025-12-05 08:11:55.018110532 +0000 UTC m=+5149.087626854" observedRunningTime="2025-12-05 08:11:55.279306796 +0000 UTC m=+5149.348823158" watchObservedRunningTime="2025-12-05 08:11:55.284091385 +0000 UTC m=+5149.353607717" Dec 05 08:11:57 crc kubenswrapper[4780]: I1205 08:11:57.278612 4780 generic.go:334] "Generic (PLEG): container finished" podID="4b355fa2-1090-4e02-b17d-323cd82a2b06" containerID="4de9adec3f660ea4ee648b9800ae676f6fbe54b9d570f7d145962723f3a20668" exitCode=0 Dec 05 08:11:57 crc kubenswrapper[4780]: I1205 08:11:57.278661 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np2gw" event={"ID":"4b355fa2-1090-4e02-b17d-323cd82a2b06","Type":"ContainerDied","Data":"4de9adec3f660ea4ee648b9800ae676f6fbe54b9d570f7d145962723f3a20668"} Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.656635 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.795150 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle\") pod \"4b355fa2-1090-4e02-b17d-323cd82a2b06\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.795514 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v469m\" (UniqueName: \"kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m\") pod \"4b355fa2-1090-4e02-b17d-323cd82a2b06\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.795622 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data\") pod \"4b355fa2-1090-4e02-b17d-323cd82a2b06\" (UID: \"4b355fa2-1090-4e02-b17d-323cd82a2b06\") " Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.801124 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m" (OuterVolumeSpecName: "kube-api-access-v469m") pod "4b355fa2-1090-4e02-b17d-323cd82a2b06" (UID: "4b355fa2-1090-4e02-b17d-323cd82a2b06"). InnerVolumeSpecName "kube-api-access-v469m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.819415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b355fa2-1090-4e02-b17d-323cd82a2b06" (UID: "4b355fa2-1090-4e02-b17d-323cd82a2b06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.838395 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data" (OuterVolumeSpecName: "config-data") pod "4b355fa2-1090-4e02-b17d-323cd82a2b06" (UID: "4b355fa2-1090-4e02-b17d-323cd82a2b06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.897122 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.897164 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v469m\" (UniqueName: \"kubernetes.io/projected/4b355fa2-1090-4e02-b17d-323cd82a2b06-kube-api-access-v469m\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:58 crc kubenswrapper[4780]: I1205 08:11:58.897183 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b355fa2-1090-4e02-b17d-323cd82a2b06-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.295346 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-np2gw" event={"ID":"4b355fa2-1090-4e02-b17d-323cd82a2b06","Type":"ContainerDied","Data":"93b22845aa4e1f665c0e58c4fbd430f329a18e51fc6db5156c01de2547ac3d3f"} Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.295396 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b22845aa4e1f665c0e58c4fbd430f329a18e51fc6db5156c01de2547ac3d3f" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.295434 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-np2gw" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.486794 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jsvrz"] Dec 05 08:11:59 crc kubenswrapper[4780]: E1205 08:11:59.487156 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b355fa2-1090-4e02-b17d-323cd82a2b06" containerName="keystone-db-sync" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.487167 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b355fa2-1090-4e02-b17d-323cd82a2b06" containerName="keystone-db-sync" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.488981 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b355fa2-1090-4e02-b17d-323cd82a2b06" containerName="keystone-db-sync" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.489638 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.491577 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.492247 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.492341 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.492627 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.492953 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vqdl6" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.516253 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsvrz"] Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.544288 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.546252 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.565431 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.610248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.610318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.610407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.610427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.610446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc 
kubenswrapper[4780]: I1205 08:11:59.610465 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsjs\" (UniqueName: \"kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.712691 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713245 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713309 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713365 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713385 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713414 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fdv\" (UniqueName: \"kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713435 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713454 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713471 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjsjs\" (UniqueName: \"kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.713487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.718732 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.720058 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.720157 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.723377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.732130 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.740186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjsjs\" (UniqueName: \"kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs\") pod \"keystone-bootstrap-jsvrz\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.814854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.814987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fdv\" (UniqueName: \"kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.815013 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.815036 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.815114 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.816199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.816207 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.816734 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.816990 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.830720 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.847795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fdv\" (UniqueName: \"kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv\") pod \"dnsmasq-dns-5b9996877c-5tpmc\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:11:59 crc kubenswrapper[4780]: I1205 08:11:59.866100 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:12:00 crc kubenswrapper[4780]: I1205 08:12:00.365860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jsvrz"] Dec 05 08:12:00 crc kubenswrapper[4780]: W1205 08:12:00.371119 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ff7480_2d14_4fb3_bb9b_f5353b30d599.slice/crio-24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b WatchSource:0}: Error finding container 24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b: Status 404 returned error can't find the container with id 24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b Dec 05 08:12:00 crc kubenswrapper[4780]: W1205 08:12:00.432187 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca16f1c_cae0_481e_b008_973becf7fc55.slice/crio-07a99706da1ab2c1ace5de5449add68fb2f4b97d49afa7bb35dff02ec9ed28d1 WatchSource:0}: Error finding container 07a99706da1ab2c1ace5de5449add68fb2f4b97d49afa7bb35dff02ec9ed28d1: Status 404 returned error can't find the container with id 07a99706da1ab2c1ace5de5449add68fb2f4b97d49afa7bb35dff02ec9ed28d1 Dec 05 08:12:00 crc kubenswrapper[4780]: I1205 08:12:00.436803 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.317386 4780 generic.go:334] "Generic (PLEG): container finished" podID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerID="85d440db25cf8567a031874d404f988a05477c2e5c5eab2382fb27fbdb895283" exitCode=0 Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.317748 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" event={"ID":"7ca16f1c-cae0-481e-b008-973becf7fc55","Type":"ContainerDied","Data":"85d440db25cf8567a031874d404f988a05477c2e5c5eab2382fb27fbdb895283"} Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.317784 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" event={"ID":"7ca16f1c-cae0-481e-b008-973becf7fc55","Type":"ContainerStarted","Data":"07a99706da1ab2c1ace5de5449add68fb2f4b97d49afa7bb35dff02ec9ed28d1"} Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.322203 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsvrz" event={"ID":"34ff7480-2d14-4fb3-bb9b-f5353b30d599","Type":"ContainerStarted","Data":"71c19e5abf6001170aedf68a64a856431e1dbfdb327c244ab94ad3cc223ad705"} Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.322265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsvrz" 
event={"ID":"34ff7480-2d14-4fb3-bb9b-f5353b30d599","Type":"ContainerStarted","Data":"24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b"} Dec 05 08:12:01 crc kubenswrapper[4780]: I1205 08:12:01.374490 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jsvrz" podStartSLOduration=2.374458576 podStartE2EDuration="2.374458576s" podCreationTimestamp="2025-12-05 08:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:12:01.372829291 +0000 UTC m=+5155.442345633" watchObservedRunningTime="2025-12-05 08:12:01.374458576 +0000 UTC m=+5155.443974918" Dec 05 08:12:02 crc kubenswrapper[4780]: I1205 08:12:02.336753 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" event={"ID":"7ca16f1c-cae0-481e-b008-973becf7fc55","Type":"ContainerStarted","Data":"03e3623c90726183dc6a8ec9d5cd3accb786ce3f2d84401fdd2fa8d72078270e"} Dec 05 08:12:02 crc kubenswrapper[4780]: I1205 08:12:02.371292 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" podStartSLOduration=3.371275304 podStartE2EDuration="3.371275304s" podCreationTimestamp="2025-12-05 08:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:12:02.366520804 +0000 UTC m=+5156.436037146" watchObservedRunningTime="2025-12-05 08:12:02.371275304 +0000 UTC m=+5156.440791636" Dec 05 08:12:03 crc kubenswrapper[4780]: I1205 08:12:03.343949 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:12:04 crc kubenswrapper[4780]: I1205 08:12:04.355226 4780 generic.go:334] "Generic (PLEG): container finished" podID="34ff7480-2d14-4fb3-bb9b-f5353b30d599" containerID="71c19e5abf6001170aedf68a64a856431e1dbfdb327c244ab94ad3cc223ad705" exitCode=0 Dec 05 08:12:04 crc kubenswrapper[4780]: I1205 08:12:04.355263 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsvrz" event={"ID":"34ff7480-2d14-4fb3-bb9b-f5353b30d599","Type":"ContainerDied","Data":"71c19e5abf6001170aedf68a64a856431e1dbfdb327c244ab94ad3cc223ad705"} Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.732306 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933518 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933592 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933659 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933682 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933731 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjsjs\" (UniqueName: \"kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.933764 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle\") pod \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\" (UID: \"34ff7480-2d14-4fb3-bb9b-f5353b30d599\") " Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.940806 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.940997 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts" (OuterVolumeSpecName: "scripts") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.941017 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs" (OuterVolumeSpecName: "kube-api-access-tjsjs") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "kube-api-access-tjsjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.941963 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.967122 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:05 crc kubenswrapper[4780]: I1205 08:12:05.970116 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data" (OuterVolumeSpecName: "config-data") pod "34ff7480-2d14-4fb3-bb9b-f5353b30d599" (UID: "34ff7480-2d14-4fb3-bb9b-f5353b30d599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.036616 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.036760 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.036842 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjsjs\" (UniqueName: \"kubernetes.io/projected/34ff7480-2d14-4fb3-bb9b-f5353b30d599-kube-api-access-tjsjs\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.036966 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.037054 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.037129 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34ff7480-2d14-4fb3-bb9b-f5353b30d599-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.377349 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jsvrz" event={"ID":"34ff7480-2d14-4fb3-bb9b-f5353b30d599","Type":"ContainerDied","Data":"24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b"} Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.377398 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b47c9e15d5151cfc130541590f914713d2047d98ccf3f12d72bae06f90f48b" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.377437 4780 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jsvrz" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.442341 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jsvrz"] Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.448587 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jsvrz"] Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.558371 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-69nl5"] Dec 05 08:12:06 crc kubenswrapper[4780]: E1205 08:12:06.558762 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ff7480-2d14-4fb3-bb9b-f5353b30d599" containerName="keystone-bootstrap" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.558780 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ff7480-2d14-4fb3-bb9b-f5353b30d599" containerName="keystone-bootstrap" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.559022 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ff7480-2d14-4fb3-bb9b-f5353b30d599" containerName="keystone-bootstrap" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.559709 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.562558 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vqdl6" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.562562 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.562724 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.562910 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.563757 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.574019 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-69nl5"] Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.754679 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtjx\" (UniqueName: \"kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.754754 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.754792 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 
08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.754807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.754858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.755603 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857483 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857564 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857651 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.857772 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtjx\" (UniqueName: \"kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.861997 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.862272 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.862474 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.862681 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.863544 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.875659 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqtjx\" (UniqueName: \"kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx\") pod \"keystone-bootstrap-69nl5\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:06 crc kubenswrapper[4780]: I1205 08:12:06.883558 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:07 crc kubenswrapper[4780]: I1205 08:12:07.368671 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-69nl5"] Dec 05 08:12:07 crc kubenswrapper[4780]: I1205 08:12:07.390399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-69nl5" event={"ID":"242f5b62-3d4e-4de1-ba14-3a3acce4a455","Type":"ContainerStarted","Data":"f5b19f44ea9cf9d0ed6710bd0407f3e1a4594858f4a334297da61dea4d5a369c"} Dec 05 08:12:08 crc kubenswrapper[4780]: I1205 08:12:08.148778 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ff7480-2d14-4fb3-bb9b-f5353b30d599" path="/var/lib/kubelet/pods/34ff7480-2d14-4fb3-bb9b-f5353b30d599/volumes" Dec 05 08:12:08 crc kubenswrapper[4780]: I1205 08:12:08.400602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-69nl5" event={"ID":"242f5b62-3d4e-4de1-ba14-3a3acce4a455","Type":"ContainerStarted","Data":"4e265fd94940ef981df6e88fa57f9417577253e469c4daa89e7a46ae80739f8f"} Dec 05 08:12:08 crc kubenswrapper[4780]: I1205 08:12:08.430666 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-69nl5" podStartSLOduration=2.43064496 podStartE2EDuration="2.43064496s" podCreationTimestamp="2025-12-05 08:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:12:08.422933111 +0000 UTC m=+5162.492449453" watchObservedRunningTime="2025-12-05 08:12:08.43064496 +0000 UTC m=+5162.500161292" Dec 05 08:12:09 crc kubenswrapper[4780]: I1205 08:12:09.868031 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:12:09 crc kubenswrapper[4780]: I1205 08:12:09.933908 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:12:09 crc kubenswrapper[4780]: I1205 08:12:09.934313 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="dnsmasq-dns" containerID="cri-o://822fec1d5af7ad72030d3fdd8273f0c3273d45643e6bf70ad162b906ebe51fc6" gracePeriod=10 Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.425539 4780 generic.go:334] "Generic (PLEG): container finished" podID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerID="822fec1d5af7ad72030d3fdd8273f0c3273d45643e6bf70ad162b906ebe51fc6" exitCode=0 Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.425613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" event={"ID":"cf447a04-4ae4-43f8-84ed-6fc92c2a6445","Type":"ContainerDied","Data":"822fec1d5af7ad72030d3fdd8273f0c3273d45643e6bf70ad162b906ebe51fc6"} Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.426008 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" event={"ID":"cf447a04-4ae4-43f8-84ed-6fc92c2a6445","Type":"ContainerDied","Data":"3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df"} Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.426028 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3533eda012d92642f020bedfcdb5c6e3a237eb5458da4584955cb50de3f5c2df" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.436399 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.625816 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld2cm\" (UniqueName: \"kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm\") pod \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.626294 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb\") pod \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.626411 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb\") pod \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.626473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config\") pod \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.626557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc\") pod \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\" (UID: \"cf447a04-4ae4-43f8-84ed-6fc92c2a6445\") " Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.631297 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm" (OuterVolumeSpecName: "kube-api-access-ld2cm") pod "cf447a04-4ae4-43f8-84ed-6fc92c2a6445" (UID: "cf447a04-4ae4-43f8-84ed-6fc92c2a6445"). InnerVolumeSpecName "kube-api-access-ld2cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.668673 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf447a04-4ae4-43f8-84ed-6fc92c2a6445" (UID: "cf447a04-4ae4-43f8-84ed-6fc92c2a6445"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.669421 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf447a04-4ae4-43f8-84ed-6fc92c2a6445" (UID: "cf447a04-4ae4-43f8-84ed-6fc92c2a6445"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.671273 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config" (OuterVolumeSpecName: "config") pod "cf447a04-4ae4-43f8-84ed-6fc92c2a6445" (UID: "cf447a04-4ae4-43f8-84ed-6fc92c2a6445"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.672192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf447a04-4ae4-43f8-84ed-6fc92c2a6445" (UID: "cf447a04-4ae4-43f8-84ed-6fc92c2a6445"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.728680 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.728717 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.728727 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.728737 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:10 crc kubenswrapper[4780]: I1205 08:12:10.728746 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld2cm\" (UniqueName: \"kubernetes.io/projected/cf447a04-4ae4-43f8-84ed-6fc92c2a6445-kube-api-access-ld2cm\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:11 crc kubenswrapper[4780]: I1205 08:12:11.434685 4780 generic.go:334] "Generic (PLEG): container finished" podID="242f5b62-3d4e-4de1-ba14-3a3acce4a455" containerID="4e265fd94940ef981df6e88fa57f9417577253e469c4daa89e7a46ae80739f8f" exitCode=0 Dec 05 08:12:11 crc kubenswrapper[4780]: I1205 08:12:11.434730 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-69nl5" event={"ID":"242f5b62-3d4e-4de1-ba14-3a3acce4a455","Type":"ContainerDied","Data":"4e265fd94940ef981df6e88fa57f9417577253e469c4daa89e7a46ae80739f8f"} Dec 05 08:12:11 crc kubenswrapper[4780]: I1205 08:12:11.434762 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" Dec 05 08:12:11 crc kubenswrapper[4780]: I1205 08:12:11.470508 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:12:11 crc kubenswrapper[4780]: I1205 08:12:11.477043 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc8d6fdd7-9qnr6"] Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.150659 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" path="/var/lib/kubelet/pods/cf447a04-4ae4-43f8-84ed-6fc92c2a6445/volumes" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.770078 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963020 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963062 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963094 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963124 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963145 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.963210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtjx\" (UniqueName: \"kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx\") pod \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\" (UID: \"242f5b62-3d4e-4de1-ba14-3a3acce4a455\") " Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.968007 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts" (OuterVolumeSpecName: "scripts") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.968641 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.970981 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.987121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx" (OuterVolumeSpecName: "kube-api-access-rqtjx") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "kube-api-access-rqtjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.991254 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data" (OuterVolumeSpecName: "config-data") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:12 crc kubenswrapper[4780]: I1205 08:12:12.994027 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "242f5b62-3d4e-4de1-ba14-3a3acce4a455" (UID: "242f5b62-3d4e-4de1-ba14-3a3acce4a455"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065433 4780 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065466 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqtjx\" (UniqueName: \"kubernetes.io/projected/242f5b62-3d4e-4de1-ba14-3a3acce4a455-kube-api-access-rqtjx\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065481 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065489 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065498 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.065508 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/242f5b62-3d4e-4de1-ba14-3a3acce4a455-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.450356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-69nl5" event={"ID":"242f5b62-3d4e-4de1-ba14-3a3acce4a455","Type":"ContainerDied","Data":"f5b19f44ea9cf9d0ed6710bd0407f3e1a4594858f4a334297da61dea4d5a369c"} Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.450729 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b19f44ea9cf9d0ed6710bd0407f3e1a4594858f4a334297da61dea4d5a369c" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 
08:12:13.450440 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-69nl5" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.538827 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f4c59d686-tl2g6"] Dec 05 08:12:13 crc kubenswrapper[4780]: E1205 08:12:13.539368 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="init" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.539391 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="init" Dec 05 08:12:13 crc kubenswrapper[4780]: E1205 08:12:13.539404 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="dnsmasq-dns" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.539413 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="dnsmasq-dns" Dec 05 08:12:13 crc kubenswrapper[4780]: E1205 08:12:13.539438 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242f5b62-3d4e-4de1-ba14-3a3acce4a455" containerName="keystone-bootstrap" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.539448 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="242f5b62-3d4e-4de1-ba14-3a3acce4a455" containerName="keystone-bootstrap" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.539647 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="242f5b62-3d4e-4de1-ba14-3a3acce4a455" containerName="keystone-bootstrap" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.539667 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="dnsmasq-dns" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.540332 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.543614 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vqdl6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.543852 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.544041 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.544179 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.544484 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.544632 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.548950 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f4c59d686-tl2g6"] Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-fernet-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673788 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-internal-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673821 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-combined-ca-bundle\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-credential-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-config-data\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-public-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: 
\"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.673974 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-scripts\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.674027 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrqw\" (UniqueName: \"kubernetes.io/projected/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-kube-api-access-jwrqw\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775790 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-fernet-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775837 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-internal-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775882 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-combined-ca-bundle\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775921 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-credential-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775962 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-config-data\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.775992 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-public-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.776021 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-scripts\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 
08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.776052 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrqw\" (UniqueName: \"kubernetes.io/projected/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-kube-api-access-jwrqw\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.779238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-fernet-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.779400 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-combined-ca-bundle\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.780313 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-scripts\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.780552 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-public-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.780982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-config-data\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.780987 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-internal-tls-certs\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.781344 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-credential-keys\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.794547 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrqw\" (UniqueName: \"kubernetes.io/projected/e4f8b205-64a6-4f1d-a468-c5e4e399de9a-kube-api-access-jwrqw\") pod \"keystone-5f4c59d686-tl2g6\" (UID: \"e4f8b205-64a6-4f1d-a468-c5e4e399de9a\") " pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:13 crc kubenswrapper[4780]: I1205 08:12:13.854347 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:14 crc kubenswrapper[4780]: I1205 08:12:14.279334 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f4c59d686-tl2g6"] Dec 05 08:12:14 crc kubenswrapper[4780]: I1205 08:12:14.462711 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f4c59d686-tl2g6" event={"ID":"e4f8b205-64a6-4f1d-a468-c5e4e399de9a","Type":"ContainerStarted","Data":"c65032f8a4b73a239d48555fcefe5df026e4f350fa4d43e587e0e5da385b0d3c"} Dec 05 08:12:15 crc kubenswrapper[4780]: I1205 08:12:15.418464 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bc8d6fdd7-9qnr6" podUID="cf447a04-4ae4-43f8-84ed-6fc92c2a6445" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.12:5353: i/o timeout" Dec 05 08:12:15 crc kubenswrapper[4780]: I1205 08:12:15.473486 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f4c59d686-tl2g6" event={"ID":"e4f8b205-64a6-4f1d-a468-c5e4e399de9a","Type":"ContainerStarted","Data":"9c3b11d2814d1d831ed2aef7aa727315808e7bf1e44e60f4900304fb640725f5"} Dec 05 08:12:15 crc kubenswrapper[4780]: I1205 08:12:15.474617 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:15 crc kubenswrapper[4780]: I1205 08:12:15.506538 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f4c59d686-tl2g6" podStartSLOduration=2.506510311 podStartE2EDuration="2.506510311s" podCreationTimestamp="2025-12-05 08:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:12:15.495506682 +0000 UTC m=+5169.565023064" watchObservedRunningTime="2025-12-05 08:12:15.506510311 +0000 UTC m=+5169.576026683" Dec 05 08:12:45 crc kubenswrapper[4780]: I1205 08:12:45.449014 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f4c59d686-tl2g6" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.107973 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.112170 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.130591 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.133988 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.133995 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.135078 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-66lpq" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.223865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.224003 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.224044 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.224106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rth8\" (UniqueName: \"kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.325750 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rth8\" (UniqueName: \"kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.325835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.325961 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.326621 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.328262 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.334787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.335382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.344707 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rth8\" (UniqueName: \"kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8\") pod \"openstackclient\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.456311 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 08:12:49 crc kubenswrapper[4780]: I1205 08:12:49.953043 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:12:49 crc kubenswrapper[4780]: W1205 08:12:49.967553 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6c5dc61_e7ce_4343_b825_1a91fd8016a9.slice/crio-3f04515407d603e556942f1a0f0f7c471b62c1feb82309a0108be97ad6db8df8 WatchSource:0}: Error finding container 3f04515407d603e556942f1a0f0f7c471b62c1feb82309a0108be97ad6db8df8: Status 404 returned error can't find the container with id 3f04515407d603e556942f1a0f0f7c471b62c1feb82309a0108be97ad6db8df8 Dec 05 08:12:50 crc kubenswrapper[4780]: I1205 08:12:50.773693 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e6c5dc61-e7ce-4343-b825-1a91fd8016a9","Type":"ContainerStarted","Data":"3f04515407d603e556942f1a0f0f7c471b62c1feb82309a0108be97ad6db8df8"} Dec 05 08:12:59 crc kubenswrapper[4780]: I1205 08:12:59.907532 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:12:59 crc kubenswrapper[4780]: I1205 08:12:59.908167 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:13:00 crc kubenswrapper[4780]: I1205 08:13:00.862309 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e6c5dc61-e7ce-4343-b825-1a91fd8016a9","Type":"ContainerStarted","Data":"7d81b6d424c7e115ba243794f0155f4a28119eaff4267a1435f37de6bc2e4a02"} Dec 05 08:13:00 crc kubenswrapper[4780]: I1205 08:13:00.882088 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.9142680890000001 podStartE2EDuration="11.882067046s" podCreationTimestamp="2025-12-05 08:12:49 +0000 UTC" firstStartedPulling="2025-12-05 08:12:49.969744732 +0000 UTC m=+5204.039261064" lastFinishedPulling="2025-12-05 08:12:59.937543689 +0000 UTC m=+5214.007060021" observedRunningTime="2025-12-05 08:13:00.881204112 +0000 UTC m=+5214.950720474" watchObservedRunningTime="2025-12-05 08:13:00.882067046 +0000 UTC m=+5214.951583388" Dec 05 08:13:29 crc kubenswrapper[4780]: I1205 08:13:29.908296 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:13:29 crc kubenswrapper[4780]: I1205 08:13:29.909957 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.164719 4780 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"] Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.173771 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.191702 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"] Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.304445 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sb4k\" (UniqueName: \"kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.304578 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.304717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.406082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.406197 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.406240 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sb4k\" (UniqueName: \"kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.406825 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.406826 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content\") pod \"certified-operators-fwnzm\" 
(UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.425822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sb4k\" (UniqueName: \"kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k\") pod \"certified-operators-fwnzm\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.493342 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:32 crc kubenswrapper[4780]: I1205 08:13:32.839155 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"] Dec 05 08:13:33 crc kubenswrapper[4780]: I1205 08:13:33.132603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerStarted","Data":"1c59888787a3a3628a7cb99cfb329e3805884c6c3f51434bc10f6c0c5ba89eda"} Dec 05 08:13:34 crc kubenswrapper[4780]: I1205 08:13:34.147103 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb172376-d10b-4610-9d1e-56803bd4488b" containerID="192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7" exitCode=0 Dec 05 08:13:34 crc kubenswrapper[4780]: I1205 08:13:34.151070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerDied","Data":"192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7"} Dec 05 08:13:35 crc kubenswrapper[4780]: I1205 08:13:35.156783 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb172376-d10b-4610-9d1e-56803bd4488b" containerID="a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0" exitCode=0 Dec 05 08:13:35 crc kubenswrapper[4780]: I1205 08:13:35.156832 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerDied","Data":"a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0"} Dec 05 08:13:36 crc kubenswrapper[4780]: I1205 08:13:36.165054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerStarted","Data":"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"} Dec 05 08:13:36 crc kubenswrapper[4780]: I1205 08:13:36.182733 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwnzm" podStartSLOduration=2.79690504 podStartE2EDuration="4.182712988s" podCreationTimestamp="2025-12-05 08:13:32 +0000 UTC" firstStartedPulling="2025-12-05 08:13:34.159173048 +0000 UTC m=+5248.228689380" lastFinishedPulling="2025-12-05 08:13:35.544980996 +0000 UTC m=+5249.614497328" observedRunningTime="2025-12-05 08:13:36.182526613 +0000 UTC m=+5250.252042945" watchObservedRunningTime="2025-12-05 08:13:36.182712988 +0000 UTC m=+5250.252229320" Dec 05 08:13:42 crc kubenswrapper[4780]: I1205 08:13:42.493751 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:42 crc kubenswrapper[4780]: I1205 
08:13:42.494263 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:42 crc kubenswrapper[4780]: I1205 08:13:42.540974 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:43 crc kubenswrapper[4780]: I1205 08:13:43.293554 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:43 crc kubenswrapper[4780]: I1205 08:13:43.349842 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"] Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.228371 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwnzm" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="registry-server" containerID="cri-o://6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac" gracePeriod=2 Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.605727 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwnzm" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.743159 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities\") pod \"bb172376-d10b-4610-9d1e-56803bd4488b\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.743250 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content\") pod \"bb172376-d10b-4610-9d1e-56803bd4488b\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.743301 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sb4k\" (UniqueName: \"kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k\") pod \"bb172376-d10b-4610-9d1e-56803bd4488b\" (UID: \"bb172376-d10b-4610-9d1e-56803bd4488b\") " Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.743842 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities" (OuterVolumeSpecName: "utilities") pod "bb172376-d10b-4610-9d1e-56803bd4488b" (UID: "bb172376-d10b-4610-9d1e-56803bd4488b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.748562 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k" (OuterVolumeSpecName: "kube-api-access-7sb4k") pod "bb172376-d10b-4610-9d1e-56803bd4488b" (UID: "bb172376-d10b-4610-9d1e-56803bd4488b"). InnerVolumeSpecName "kube-api-access-7sb4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.845544 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sb4k\" (UniqueName: \"kubernetes.io/projected/bb172376-d10b-4610-9d1e-56803bd4488b-kube-api-access-7sb4k\") on node \"crc\" DevicePath \"\"" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.845589 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.942261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb172376-d10b-4610-9d1e-56803bd4488b" (UID: "bb172376-d10b-4610-9d1e-56803bd4488b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:13:45 crc kubenswrapper[4780]: I1205 08:13:45.947531 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb172376-d10b-4610-9d1e-56803bd4488b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.237788 4780 generic.go:334] "Generic (PLEG): container finished" podID="bb172376-d10b-4610-9d1e-56803bd4488b" containerID="6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac" exitCode=0 Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.237823 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerDied","Data":"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"} Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.237852 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwnzm"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.237905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwnzm" event={"ID":"bb172376-d10b-4610-9d1e-56803bd4488b","Type":"ContainerDied","Data":"1c59888787a3a3628a7cb99cfb329e3805884c6c3f51434bc10f6c0c5ba89eda"}
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.237940 4780 scope.go:117] "RemoveContainer" containerID="6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.261106 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"]
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.265347 4780 scope.go:117] "RemoveContainer" containerID="a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.267701 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwnzm"]
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.282364 4780 scope.go:117] "RemoveContainer" containerID="192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.316710 4780 scope.go:117] "RemoveContainer" containerID="6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"
Dec 05 08:13:46 crc kubenswrapper[4780]: E1205 08:13:46.317123 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac\": container with ID starting with 6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac not found: ID does not exist" containerID="6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.317152 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac"} err="failed to get container status \"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac\": rpc error: code = NotFound desc = could not find container \"6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac\": container with ID starting with 6d51c44019deac293eeefc277988c75092101dd287b4ca42d3022b35fef43eac not found: ID does not exist"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.317172 4780 scope.go:117] "RemoveContainer" containerID="a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0"
Dec 05 08:13:46 crc kubenswrapper[4780]: E1205 08:13:46.317404 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0\": container with ID starting with a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0 not found: ID does not exist" containerID="a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.317424 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0"} err="failed to get container status \"a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0\": rpc error: code = NotFound desc = could not find container \"a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0\": container with ID starting with a5e6cc463e55f5dc098069f50cf041b37848f7cfb0b13cb701de2cf18c3326b0 not found: ID does not exist"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.317435 4780 scope.go:117] "RemoveContainer" containerID="192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7"
Dec 05 08:13:46 crc kubenswrapper[4780]: E1205 08:13:46.317740 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7\": container with ID starting with 192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7 not found: ID does not exist" containerID="192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7"
Dec 05 08:13:46 crc kubenswrapper[4780]: I1205 08:13:46.317763 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7"} err="failed to get container status \"192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7\": rpc error: code = NotFound desc = could not find container \"192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7\": container with ID starting with 192420bf9540df0d6a7330a746481dddfdd3382a184ffe2cea4062776794a3b7 not found: ID does not exist"
Dec 05 08:13:48 crc kubenswrapper[4780]: I1205 08:13:48.149391 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" path="/var/lib/kubelet/pods/bb172376-d10b-4610-9d1e-56803bd4488b/volumes"
Dec 05 08:13:59 crc kubenswrapper[4780]: I1205 08:13:59.916937 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 08:13:59 crc kubenswrapper[4780]: I1205 08:13:59.917605 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 08:13:59 crc kubenswrapper[4780]: I1205 08:13:59.917657 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd"
Dec 05 08:13:59 crc kubenswrapper[4780]: I1205 08:13:59.918464 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 08:13:59 crc kubenswrapper[4780]: I1205 08:13:59.918568 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" gracePeriod=600
Dec 05 08:14:00 crc kubenswrapper[4780]: E1205 08:14:00.048963 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:14:00 crc kubenswrapper[4780]: I1205 08:14:00.375194 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" exitCode=0
Dec 05 08:14:00 crc kubenswrapper[4780]: I1205 08:14:00.375235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"}
Dec 05 08:14:00 crc kubenswrapper[4780]: I1205 08:14:00.375273 4780 scope.go:117] "RemoveContainer" containerID="3ebce31b918ac06eeeb2a54e3d0b7e2c0cbaa5123aab486853c1e76086b69647"
Dec 05 08:14:00 crc kubenswrapper[4780]: I1205 08:14:00.375853 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"
Dec 05 08:14:00 crc kubenswrapper[4780]: E1205 08:14:00.376178 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:14:13 crc kubenswrapper[4780]: I1205 08:14:13.139175 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"
Dec 05 08:14:13 crc kubenswrapper[4780]: E1205 08:14:13.140934 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.209173 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-67c5-account-create-update-phb5n"]
Dec 05 08:14:21 crc kubenswrapper[4780]: E1205 08:14:21.210095 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="registry-server"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.210111 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="registry-server"
Dec 05 08:14:21 crc kubenswrapper[4780]: E1205 08:14:21.210132 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="extract-content"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.210139 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="extract-content"
Dec 05 08:14:21 crc kubenswrapper[4780]: E1205 08:14:21.210153 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="extract-utilities"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.210161 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="extract-utilities"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.210332 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb172376-d10b-4610-9d1e-56803bd4488b" containerName="registry-server"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.211027 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.213593 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.215234 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5h2k2"]
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.216345 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.228251 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-67c5-account-create-update-phb5n"]
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.237331 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5h2k2"]
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.340819 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6m8\" (UniqueName: \"kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.340905 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.341001 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwnp\" (UniqueName: \"kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.341164 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.442389 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6m8\" (UniqueName: \"kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.442439 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.442494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwnp\" (UniqueName: \"kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.442537 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.443425 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.443473 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.460490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6m8\" (UniqueName: \"kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8\") pod \"barbican-db-create-5h2k2\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") " pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.460591 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwnp\" (UniqueName: \"kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp\") pod \"barbican-67c5-account-create-update-phb5n\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") " pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.531775 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:21 crc kubenswrapper[4780]: I1205 08:14:21.542497 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.029735 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5h2k2"]
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.065750 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-67c5-account-create-update-phb5n"]
Dec 05 08:14:22 crc kubenswrapper[4780]: W1205 08:14:22.077954 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f787a1a_4869_4803_84a5_6cf8dfac5f48.slice/crio-f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216 WatchSource:0}: Error finding container f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216: Status 404 returned error can't find the container with id f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.559231 4780 generic.go:334] "Generic (PLEG): container finished" podID="8f787a1a-4869-4803-84a5-6cf8dfac5f48" containerID="5709cd09c1bd26a1d95445f7b0ff275a6baecb8863d32ea8a246d08c1f0e5b37" exitCode=0
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.559287 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67c5-account-create-update-phb5n" event={"ID":"8f787a1a-4869-4803-84a5-6cf8dfac5f48","Type":"ContainerDied","Data":"5709cd09c1bd26a1d95445f7b0ff275a6baecb8863d32ea8a246d08c1f0e5b37"}
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.559335 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67c5-account-create-update-phb5n" event={"ID":"8f787a1a-4869-4803-84a5-6cf8dfac5f48","Type":"ContainerStarted","Data":"f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216"}
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.561337 4780 generic.go:334] "Generic (PLEG): container finished" podID="6d1d27de-6a3a-4064-84c7-f7efb078ef9d" containerID="220751bda835517fa3f8e0c65873054128f856244ab50e39c1ebcfe699e1cf4a" exitCode=0
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.561371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5h2k2" event={"ID":"6d1d27de-6a3a-4064-84c7-f7efb078ef9d","Type":"ContainerDied","Data":"220751bda835517fa3f8e0c65873054128f856244ab50e39c1ebcfe699e1cf4a"}
Dec 05 08:14:22 crc kubenswrapper[4780]: I1205 08:14:22.561399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5h2k2" event={"ID":"6d1d27de-6a3a-4064-84c7-f7efb078ef9d","Type":"ContainerStarted","Data":"d99424e9469ea1bd9bb4219674afa6940b8b6082ba7602ef5854689bedefcdc6"}
Dec 05 08:14:23 crc kubenswrapper[4780]: I1205 08:14:23.947134 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:23 crc kubenswrapper[4780]: I1205 08:14:23.952428 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.022317 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts\") pod \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") "
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.022409 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c6m8\" (UniqueName: \"kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8\") pod \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\" (UID: \"6d1d27de-6a3a-4064-84c7-f7efb078ef9d\") "
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.025542 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d1d27de-6a3a-4064-84c7-f7efb078ef9d" (UID: "6d1d27de-6a3a-4064-84c7-f7efb078ef9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.032455 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8" (OuterVolumeSpecName: "kube-api-access-8c6m8") pod "6d1d27de-6a3a-4064-84c7-f7efb078ef9d" (UID: "6d1d27de-6a3a-4064-84c7-f7efb078ef9d"). InnerVolumeSpecName "kube-api-access-8c6m8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.124379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts\") pod \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") "
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.124433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwnp\" (UniqueName: \"kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp\") pod \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\" (UID: \"8f787a1a-4869-4803-84a5-6cf8dfac5f48\") "
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.124905 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.124921 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c6m8\" (UniqueName: \"kubernetes.io/projected/6d1d27de-6a3a-4064-84c7-f7efb078ef9d-kube-api-access-8c6m8\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.125486 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f787a1a-4869-4803-84a5-6cf8dfac5f48" (UID: "8f787a1a-4869-4803-84a5-6cf8dfac5f48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.127751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp" (OuterVolumeSpecName: "kube-api-access-4dwnp") pod "8f787a1a-4869-4803-84a5-6cf8dfac5f48" (UID: "8f787a1a-4869-4803-84a5-6cf8dfac5f48"). InnerVolumeSpecName "kube-api-access-4dwnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.139384 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368"
Dec 05 08:14:24 crc kubenswrapper[4780]: E1205 08:14:24.139685 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.226797 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f787a1a-4869-4803-84a5-6cf8dfac5f48-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.226835 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwnp\" (UniqueName: \"kubernetes.io/projected/8f787a1a-4869-4803-84a5-6cf8dfac5f48-kube-api-access-4dwnp\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.578133 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5h2k2" event={"ID":"6d1d27de-6a3a-4064-84c7-f7efb078ef9d","Type":"ContainerDied","Data":"d99424e9469ea1bd9bb4219674afa6940b8b6082ba7602ef5854689bedefcdc6"}
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.578172 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99424e9469ea1bd9bb4219674afa6940b8b6082ba7602ef5854689bedefcdc6"
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.578189 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5h2k2"
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.582069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-67c5-account-create-update-phb5n" event={"ID":"8f787a1a-4869-4803-84a5-6cf8dfac5f48","Type":"ContainerDied","Data":"f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216"}
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.582117 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7071be2dcd76b04512e8d9165fd1ccde9c32ec4a40bd675f755aa324c8f8216"
Dec 05 08:14:24 crc kubenswrapper[4780]: I1205 08:14:24.582119 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-67c5-account-create-update-phb5n"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.098392 4780 scope.go:117] "RemoveContainer" containerID="05da0023d35a05f0e3fd6826718e6ab5c788ab15739abdab188c9decacfd5642"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.125228 4780 scope.go:117] "RemoveContainer" containerID="f2e313819c3e4c93a33ffefe562bf70fe54ed7edd68fd7599fba93ea53e02b86"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.166156 4780 scope.go:117] "RemoveContainer" containerID="b59ea3347f8365dc73da8c28514679d00faabb6aa675cc7c7bdb325c48320d6c"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.207061 4780 scope.go:117] "RemoveContainer" containerID="cd154e6d0f9e3600405267c713cfa19dec474a962d6b8f2011ff782cc0e7af5f"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.248033 4780 scope.go:117] "RemoveContainer" containerID="f82acadf0bbdfc7229bbdb65cc4714410e1d087f63854d2968a8fa919853bd3c"
Dec 05 08:14:25 crc kubenswrapper[4780]: I1205 08:14:25.282124 4780 scope.go:117] "RemoveContainer" containerID="200bdc8e239a3655a683f4ce62751ea11b5005aaacb3f00cff8972212af51cfd"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.525687 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zxqc4"]
Dec 05 08:14:26 crc kubenswrapper[4780]: E1205 08:14:26.526522 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f787a1a-4869-4803-84a5-6cf8dfac5f48" containerName="mariadb-account-create-update"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.526539 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f787a1a-4869-4803-84a5-6cf8dfac5f48" containerName="mariadb-account-create-update"
Dec 05 08:14:26 crc kubenswrapper[4780]: E1205 08:14:26.526557 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1d27de-6a3a-4064-84c7-f7efb078ef9d" containerName="mariadb-database-create"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.526565 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1d27de-6a3a-4064-84c7-f7efb078ef9d" containerName="mariadb-database-create"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.526742 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f787a1a-4869-4803-84a5-6cf8dfac5f48" containerName="mariadb-account-create-update"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.526755 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1d27de-6a3a-4064-84c7-f7efb078ef9d" containerName="mariadb-database-create"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.527453 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.531322 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.531336 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pqrnf"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.544216 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zxqc4"]
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.669932 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.669985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.670027 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6w4t\" (UniqueName: \"kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.772107 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.772151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.772195 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6w4t\" (UniqueName: \"kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.779008 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.785869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.794795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6w4t\" (UniqueName: \"kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t\") pod \"barbican-db-sync-zxqc4\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") " pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:26 crc kubenswrapper[4780]: I1205 08:14:26.846518 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:27 crc kubenswrapper[4780]: I1205 08:14:27.298507 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zxqc4"]
Dec 05 08:14:27 crc kubenswrapper[4780]: I1205 08:14:27.607388 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zxqc4" event={"ID":"0600caa7-8925-4bd9-adde-5ffbc2b3e732","Type":"ContainerStarted","Data":"ced1d4f2d42902568b51f8def11bd062e30d0f9cf37a41eba1636255bac142cd"}
Dec 05 08:14:31 crc kubenswrapper[4780]: I1205 08:14:31.642272 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zxqc4" event={"ID":"0600caa7-8925-4bd9-adde-5ffbc2b3e732","Type":"ContainerStarted","Data":"b9a486fe16ae93d036150541afa1a42229631d7e764e62cfaad9d7fa9fcf55cd"}
Dec 05 08:14:31 crc kubenswrapper[4780]: I1205 08:14:31.655805 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zxqc4" podStartSLOduration=1.7202270670000002 podStartE2EDuration="5.655785646s" podCreationTimestamp="2025-12-05 08:14:26 +0000 UTC" firstStartedPulling="2025-12-05 08:14:27.31476406 +0000 UTC m=+5301.384280392" lastFinishedPulling="2025-12-05 08:14:31.250322629 +0000 UTC m=+5305.319838971" observedRunningTime="2025-12-05 08:14:31.653951265 +0000 UTC m=+5305.723467617" watchObservedRunningTime="2025-12-05 08:14:31.655785646 +0000 UTC m=+5305.725301978"
Dec 05 08:14:32 crc kubenswrapper[4780]: I1205 08:14:32.651650 4780 generic.go:334] "Generic (PLEG): container finished" podID="0600caa7-8925-4bd9-adde-5ffbc2b3e732" containerID="b9a486fe16ae93d036150541afa1a42229631d7e764e62cfaad9d7fa9fcf55cd" exitCode=0
Dec 05 08:14:32 crc kubenswrapper[4780]: I1205 08:14:32.651838 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zxqc4" event={"ID":"0600caa7-8925-4bd9-adde-5ffbc2b3e732","Type":"ContainerDied","Data":"b9a486fe16ae93d036150541afa1a42229631d7e764e62cfaad9d7fa9fcf55cd"}
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.144525 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.153108 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6w4t\" (UniqueName: \"kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t\") pod \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") "
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.153236 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle\") pod \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") "
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.153342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data\") pod \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\" (UID: \"0600caa7-8925-4bd9-adde-5ffbc2b3e732\") "
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.161855 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0600caa7-8925-4bd9-adde-5ffbc2b3e732" (UID: "0600caa7-8925-4bd9-adde-5ffbc2b3e732"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.193375 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t" (OuterVolumeSpecName: "kube-api-access-x6w4t") pod "0600caa7-8925-4bd9-adde-5ffbc2b3e732" (UID: "0600caa7-8925-4bd9-adde-5ffbc2b3e732"). InnerVolumeSpecName "kube-api-access-x6w4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.201184 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0600caa7-8925-4bd9-adde-5ffbc2b3e732" (UID: "0600caa7-8925-4bd9-adde-5ffbc2b3e732"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.255366 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.255404 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0600caa7-8925-4bd9-adde-5ffbc2b3e732-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.255419 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6w4t\" (UniqueName: \"kubernetes.io/projected/0600caa7-8925-4bd9-adde-5ffbc2b3e732-kube-api-access-x6w4t\") on node \"crc\" DevicePath \"\""
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.667168 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zxqc4"
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.667089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zxqc4" event={"ID":"0600caa7-8925-4bd9-adde-5ffbc2b3e732","Type":"ContainerDied","Data":"ced1d4f2d42902568b51f8def11bd062e30d0f9cf37a41eba1636255bac142cd"}
Dec 05 08:14:34 crc kubenswrapper[4780]: I1205 08:14:34.668228 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced1d4f2d42902568b51f8def11bd062e30d0f9cf37a41eba1636255bac142cd"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.024070 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d7cbcbfd9-hlgct"]
Dec 05 08:14:35 crc kubenswrapper[4780]: E1205 08:14:35.024440 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0600caa7-8925-4bd9-adde-5ffbc2b3e732" containerName="barbican-db-sync"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.024451 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0600caa7-8925-4bd9-adde-5ffbc2b3e732" containerName="barbican-db-sync"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.024600 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0600caa7-8925-4bd9-adde-5ffbc2b3e732" containerName="barbican-db-sync"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.025440 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.034327 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d7cbcbfd9-hlgct"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.034363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.034466 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.035114 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pqrnf"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.049543 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b5984894b-tnszk"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.051018 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.056808 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.106238 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b5984894b-tnszk"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.113000 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.114431 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.120699 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.190225 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.191974 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.193765 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnlq\" (UniqueName: \"kubernetes.io/projected/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-kube-api-access-vqnlq\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-combined-ca-bundle\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-combined-ca-bundle\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194185 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data-custom\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194205 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data-custom\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194227 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194249 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e45167-1f28-490f-aa73-35137f2d0f1a-logs\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194265 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pthfw\" (UniqueName: \"kubernetes.io/projected/40e45167-1f28-490f-aa73-35137f2d0f1a-kube-api-access-pthfw\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194378 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-logs\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.194487 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.198660 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.203313 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-logs\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295502 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcxf\" (UniqueName: \"kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295556 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295578 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295601 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnlq\" (UniqueName: \"kubernetes.io/projected/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-kube-api-access-vqnlq\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295626 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxj8\" (UniqueName: \"kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295904 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.295998 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296050 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-combined-ca-bundle\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296057 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-logs\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-combined-ca-bundle\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296157 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296236 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data-custom\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296267 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data-custom\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296286 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e45167-1f28-490f-aa73-35137f2d0f1a-logs\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296329 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pthfw\" (UniqueName: \"kubernetes.io/projected/40e45167-1f28-490f-aa73-35137f2d0f1a-kube-api-access-pthfw\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296411 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.296806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40e45167-1f28-490f-aa73-35137f2d0f1a-logs\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.300392 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data-custom\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.300770 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.300975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-combined-ca-bundle\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.301216 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-combined-ca-bundle\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.301708 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40e45167-1f28-490f-aa73-35137f2d0f1a-config-data-custom\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.304998 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-config-data\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.315548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnlq\" (UniqueName: \"kubernetes.io/projected/dfccf240-dcd1-4f3d-92c5-3c195a1a481d-kube-api-access-vqnlq\") pod \"barbican-worker-d7cbcbfd9-hlgct\" (UID: \"dfccf240-dcd1-4f3d-92c5-3c195a1a481d\") " pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.317995 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pthfw\" (UniqueName: \"kubernetes.io/projected/40e45167-1f28-490f-aa73-35137f2d0f1a-kube-api-access-pthfw\") pod \"barbican-keystone-listener-7b5984894b-tnszk\" (UID: \"40e45167-1f28-490f-aa73-35137f2d0f1a\") " pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.348918 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d7cbcbfd9-hlgct"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.384021 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398160 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398234 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398270 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398305 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcxf\" (UniqueName: \"kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398322 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxj8\" (UniqueName: \"kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398410 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.398442 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.399131 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.399137 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.400043 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.400645 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.400757 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.402923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.403584 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.412937 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.416777 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcxf\" (UniqueName: \"kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf\") pod \"dnsmasq-dns-5bb4bd68fc-jmr8q\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.422981 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxj8\" (UniqueName: \"kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8\") pod \"barbican-api-6df6665768-zr7sg\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.439931 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.512491 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6df6665768-zr7sg"
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.735303 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d7cbcbfd9-hlgct"]
Dec 05 08:14:35 crc kubenswrapper[4780]: W1205 08:14:35.742327 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfccf240_dcd1_4f3d_92c5_3c195a1a481d.slice/crio-f3fa4110a0f319f189a2e6473fce5ff720cd60ca5dbabfa88507215d6f130a05 WatchSource:0}: Error finding container f3fa4110a0f319f189a2e6473fce5ff720cd60ca5dbabfa88507215d6f130a05: Status 404 returned error can't find the container with id f3fa4110a0f319f189a2e6473fce5ff720cd60ca5dbabfa88507215d6f130a05
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.778767 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b5984894b-tnszk"]
Dec 05 08:14:35 crc kubenswrapper[4780]: I1205 08:14:35.861784 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"]
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.126544 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"]
Dec 05 08:14:36 crc kubenswrapper[4780]: W1205 08:14:36.177402 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12316d0f_f90c_49f2_a30f_9fc52b91714e.slice/crio-e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc WatchSource:0}: Error finding container e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc: Status 404 returned error can't find the container with id e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.693097 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerStarted","Data":"7d7defa5f0b96f025b86f67bc0e1d26b2570fc0df74d5ad63367066ce374305d"}
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.693434 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerStarted","Data":"e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc"}
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.694959 4780 generic.go:334] "Generic (PLEG): container finished" podID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerID="f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0" exitCode=0
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.695039 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" event={"ID":"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022","Type":"ContainerDied","Data":"f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0"}
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.695100 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" event={"ID":"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022","Type":"ContainerStarted","Data":"f71a264aabe2256422b2910e1df231ab317ffb0f522f8da2f64d5062f050e3fe"}
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.696942 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk" event={"ID":"40e45167-1f28-490f-aa73-35137f2d0f1a","Type":"ContainerStarted","Data":"83fb59740fe67381dc267f7951b4ae3c5e2637ee8e62499cd9a8a3acd86ce0a3"}
Dec 05 08:14:36 crc kubenswrapper[4780]: I1205 08:14:36.699095 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7cbcbfd9-hlgct" event={"ID":"dfccf240-dcd1-4f3d-92c5-3c195a1a481d","Type":"ContainerStarted","Data":"f3fa4110a0f319f189a2e6473fce5ff720cd60ca5dbabfa88507215d6f130a05"}
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.150310 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55486c8ff8-8th4d"]
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.151735 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55486c8ff8-8th4d"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.159755 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.160007 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.171425 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55486c8ff8-8th4d"]
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349509 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-internal-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349538 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-combined-ca-bundle\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d"
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205
08:14:37.349561 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-public-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349582 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-logs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349624 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndrz\" (UniqueName: \"kubernetes.io/projected/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-kube-api-access-pndrz\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.349657 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data-custom\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451005 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-internal-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451066 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-combined-ca-bundle\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-public-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-logs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451182 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pndrz\" (UniqueName: \"kubernetes.io/projected/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-kube-api-access-pndrz\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" 
Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451228 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data-custom\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.451315 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.457673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-logs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.464467 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-internal-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.467343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-combined-ca-bundle\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.467797 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-public-tls-certs\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.470395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.476443 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-config-data-custom\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.494233 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pndrz\" (UniqueName: \"kubernetes.io/projected/e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e-kube-api-access-pndrz\") pod \"barbican-api-55486c8ff8-8th4d\" (UID: \"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e\") " pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.724476 4780 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" event={"ID":"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022","Type":"ContainerStarted","Data":"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787"} Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.724631 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.742350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk" event={"ID":"40e45167-1f28-490f-aa73-35137f2d0f1a","Type":"ContainerStarted","Data":"630f8a316a2434875178826eb7d75943906004bcb37dba50a70324619eb71840"} Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.761842 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" podStartSLOduration=2.761823291 podStartE2EDuration="2.761823291s" podCreationTimestamp="2025-12-05 08:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:14:37.740285235 +0000 UTC m=+5311.809801557" watchObservedRunningTime="2025-12-05 08:14:37.761823291 +0000 UTC m=+5311.831339623" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.765332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7cbcbfd9-hlgct" event={"ID":"dfccf240-dcd1-4f3d-92c5-3c195a1a481d","Type":"ContainerStarted","Data":"2e51cc63ae624b9e2def5db23bce20e264ea9976c12d2f6cb8d77080f2cc7296"} Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.765376 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7cbcbfd9-hlgct" event={"ID":"dfccf240-dcd1-4f3d-92c5-3c195a1a481d","Type":"ContainerStarted","Data":"a852b63d5b84201f22906267e309739de8610024d8c01053403936dc9b37ec62"} Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.772498 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.776433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerStarted","Data":"697df2e2fbccdaee7cc5a6262c2dacef5ff13e09b9516ecb857ee16a856b483f"} Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.777296 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.777330 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.778860 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk" podStartSLOduration=1.528914291 podStartE2EDuration="2.778841784s" podCreationTimestamp="2025-12-05 08:14:35 +0000 UTC" firstStartedPulling="2025-12-05 08:14:35.790904491 +0000 UTC m=+5309.860420823" lastFinishedPulling="2025-12-05 08:14:37.040831984 +0000 UTC m=+5311.110348316" observedRunningTime="2025-12-05 08:14:37.770313992 +0000 UTC m=+5311.839830324" watchObservedRunningTime="2025-12-05 08:14:37.778841784 +0000 UTC m=+5311.848358116" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.792237 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d7cbcbfd9-hlgct" podStartSLOduration=2.447424135 podStartE2EDuration="3.792216667s" podCreationTimestamp="2025-12-05 08:14:34 +0000 UTC" firstStartedPulling="2025-12-05 08:14:35.744448698 +0000 UTC m=+5309.813965030" lastFinishedPulling="2025-12-05 08:14:37.08924123 +0000 UTC m=+5311.158757562" observedRunningTime="2025-12-05 08:14:37.790373777 +0000 UTC m=+5311.859890109" watchObservedRunningTime="2025-12-05 08:14:37.792216667 +0000 UTC m=+5311.861732999" Dec 05 08:14:37 crc kubenswrapper[4780]: I1205 08:14:37.813873 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6df6665768-zr7sg" podStartSLOduration=2.813850026 podStartE2EDuration="2.813850026s" podCreationTimestamp="2025-12-05 08:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:14:37.81250656 +0000 UTC m=+5311.882022892" watchObservedRunningTime="2025-12-05 08:14:37.813850026 +0000 UTC m=+5311.883366358" Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.260223 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55486c8ff8-8th4d"] Dec 05 08:14:38 crc kubenswrapper[4780]: W1205 08:14:38.260624 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66ff5c5_7e7c_4ee2_9395_a143e5dc0c1e.slice/crio-e3586025ba5c0f27870a207fd1114dc100ca291b3a9a78519a54a969c6397811 WatchSource:0}: Error finding container e3586025ba5c0f27870a207fd1114dc100ca291b3a9a78519a54a969c6397811: Status 404 returned error can't find the container with id e3586025ba5c0f27870a207fd1114dc100ca291b3a9a78519a54a969c6397811 Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.791334 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55486c8ff8-8th4d" 
event={"ID":"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e","Type":"ContainerStarted","Data":"ac5b37d069dd2c66c11c3801344afaf1fb81b20c1f71e38b75be528def243e6d"} Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.792626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55486c8ff8-8th4d" event={"ID":"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e","Type":"ContainerStarted","Data":"a62f958a309ae9092e4fc3dc642a163ba32ea5a00c8d79459523c59e5d6d5368"} Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.792710 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.792773 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55486c8ff8-8th4d" event={"ID":"e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e","Type":"ContainerStarted","Data":"e3586025ba5c0f27870a207fd1114dc100ca291b3a9a78519a54a969c6397811"} Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.792834 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:38 crc kubenswrapper[4780]: I1205 08:14:38.793696 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b5984894b-tnszk" event={"ID":"40e45167-1f28-490f-aa73-35137f2d0f1a","Type":"ContainerStarted","Data":"74ecd15eff7aae0f3d0c0418ee9ed77a318a03f7fd48519b97c55af3f2223a75"} Dec 05 08:14:39 crc kubenswrapper[4780]: I1205 08:14:39.139409 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:14:39 crc kubenswrapper[4780]: E1205 08:14:39.139970 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.446703 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.473067 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55486c8ff8-8th4d" podStartSLOduration=8.473042758 podStartE2EDuration="8.473042758s" podCreationTimestamp="2025-12-05 08:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:14:38.822128856 +0000 UTC m=+5312.891645188" watchObservedRunningTime="2025-12-05 08:14:45.473042758 +0000 UTC m=+5319.542559110" Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.531182 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.531401 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="dnsmasq-dns" containerID="cri-o://03e3623c90726183dc6a8ec9d5cd3accb786ce3f2d84401fdd2fa8d72078270e" gracePeriod=10 Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.860986 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerID="03e3623c90726183dc6a8ec9d5cd3accb786ce3f2d84401fdd2fa8d72078270e" exitCode=0 Dec 05 08:14:45 crc kubenswrapper[4780]: I1205 08:14:45.861071 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" event={"ID":"7ca16f1c-cae0-481e-b008-973becf7fc55","Type":"ContainerDied","Data":"03e3623c90726183dc6a8ec9d5cd3accb786ce3f2d84401fdd2fa8d72078270e"} Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.106082 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.211567 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config\") pod \"7ca16f1c-cae0-481e-b008-973becf7fc55\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.211673 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc\") pod \"7ca16f1c-cae0-481e-b008-973becf7fc55\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.211744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb\") pod \"7ca16f1c-cae0-481e-b008-973becf7fc55\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.211803 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb\") pod \"7ca16f1c-cae0-481e-b008-973becf7fc55\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.211845 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7fdv\" (UniqueName: \"kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv\") pod \"7ca16f1c-cae0-481e-b008-973becf7fc55\" (UID: \"7ca16f1c-cae0-481e-b008-973becf7fc55\") " Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.217788 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv" (OuterVolumeSpecName: "kube-api-access-j7fdv") pod "7ca16f1c-cae0-481e-b008-973becf7fc55" (UID: "7ca16f1c-cae0-481e-b008-973becf7fc55"). InnerVolumeSpecName "kube-api-access-j7fdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.257576 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ca16f1c-cae0-481e-b008-973becf7fc55" (UID: "7ca16f1c-cae0-481e-b008-973becf7fc55"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.258732 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ca16f1c-cae0-481e-b008-973becf7fc55" (UID: "7ca16f1c-cae0-481e-b008-973becf7fc55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.260835 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ca16f1c-cae0-481e-b008-973becf7fc55" (UID: "7ca16f1c-cae0-481e-b008-973becf7fc55"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.262424 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config" (OuterVolumeSpecName: "config") pod "7ca16f1c-cae0-481e-b008-973becf7fc55" (UID: "7ca16f1c-cae0-481e-b008-973becf7fc55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.314292 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.314326 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.314335 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7fdv\" (UniqueName: \"kubernetes.io/projected/7ca16f1c-cae0-481e-b008-973becf7fc55-kube-api-access-j7fdv\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.314346 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.314355 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca16f1c-cae0-481e-b008-973becf7fc55-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.871098 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" event={"ID":"7ca16f1c-cae0-481e-b008-973becf7fc55","Type":"ContainerDied","Data":"07a99706da1ab2c1ace5de5449add68fb2f4b97d49afa7bb35dff02ec9ed28d1"} Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.871153 4780 scope.go:117] "RemoveContainer" containerID="03e3623c90726183dc6a8ec9d5cd3accb786ce3f2d84401fdd2fa8d72078270e" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.871177 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b9996877c-5tpmc" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.897980 4780 scope.go:117] "RemoveContainer" containerID="85d440db25cf8567a031874d404f988a05477c2e5c5eab2382fb27fbdb895283" Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.901798 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:14:46 crc kubenswrapper[4780]: I1205 08:14:46.909199 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b9996877c-5tpmc"] Dec 05 08:14:47 crc kubenswrapper[4780]: I1205 08:14:47.135425 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:47 crc kubenswrapper[4780]: I1205 08:14:47.376127 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:48 crc kubenswrapper[4780]: I1205 08:14:48.170170 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" path="/var/lib/kubelet/pods/7ca16f1c-cae0-481e-b008-973becf7fc55/volumes" Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.268936 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.349826 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55486c8ff8-8th4d" Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.412543 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"] Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.412778 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6df6665768-zr7sg" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api-log" containerID="cri-o://7d7defa5f0b96f025b86f67bc0e1d26b2570fc0df74d5ad63367066ce374305d" gracePeriod=30 Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.413241 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6df6665768-zr7sg" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api" containerID="cri-o://697df2e2fbccdaee7cc5a6262c2dacef5ff13e09b9516ecb857ee16a856b483f" gracePeriod=30 Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.923228 4780 generic.go:334] "Generic (PLEG): container finished" podID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerID="7d7defa5f0b96f025b86f67bc0e1d26b2570fc0df74d5ad63367066ce374305d" exitCode=143 Dec 05 08:14:49 crc kubenswrapper[4780]: I1205 08:14:49.923312 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerDied","Data":"7d7defa5f0b96f025b86f67bc0e1d26b2570fc0df74d5ad63367066ce374305d"} Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.138922 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:14:52 crc kubenswrapper[4780]: E1205 08:14:52.139486 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.605735 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6df6665768-zr7sg" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:36006->10.217.1.30:9311: read: connection reset by peer" Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.605794 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6df6665768-zr7sg" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.30:9311/healthcheck\": read tcp 10.217.0.2:36004->10.217.1.30:9311: read: connection reset by peer" Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.951386 4780 generic.go:334] "Generic (PLEG): container finished" podID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerID="697df2e2fbccdaee7cc5a6262c2dacef5ff13e09b9516ecb857ee16a856b483f" exitCode=0 Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.951478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerDied","Data":"697df2e2fbccdaee7cc5a6262c2dacef5ff13e09b9516ecb857ee16a856b483f"} Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.951698 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6df6665768-zr7sg" event={"ID":"12316d0f-f90c-49f2-a30f-9fc52b91714e","Type":"ContainerDied","Data":"e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc"} Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.951714 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87f2dd1f8a1d9d9cee940e22a7127d7a77d32a3d52ec68ca51b222218c408dc" Dec 05 08:14:52 crc kubenswrapper[4780]: I1205 08:14:52.969441 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.069849 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data\") pod \"12316d0f-f90c-49f2-a30f-9fc52b91714e\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.069909 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msxj8\" (UniqueName: \"kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8\") pod \"12316d0f-f90c-49f2-a30f-9fc52b91714e\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.069928 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom\") pod \"12316d0f-f90c-49f2-a30f-9fc52b91714e\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.069997 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs\") pod \"12316d0f-f90c-49f2-a30f-9fc52b91714e\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.070017 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle\") pod \"12316d0f-f90c-49f2-a30f-9fc52b91714e\" (UID: \"12316d0f-f90c-49f2-a30f-9fc52b91714e\") " Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.071015 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs" (OuterVolumeSpecName: "logs") pod "12316d0f-f90c-49f2-a30f-9fc52b91714e" (UID: "12316d0f-f90c-49f2-a30f-9fc52b91714e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.074731 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "12316d0f-f90c-49f2-a30f-9fc52b91714e" (UID: "12316d0f-f90c-49f2-a30f-9fc52b91714e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.074848 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8" (OuterVolumeSpecName: "kube-api-access-msxj8") pod "12316d0f-f90c-49f2-a30f-9fc52b91714e" (UID: "12316d0f-f90c-49f2-a30f-9fc52b91714e"). InnerVolumeSpecName "kube-api-access-msxj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.095029 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12316d0f-f90c-49f2-a30f-9fc52b91714e" (UID: "12316d0f-f90c-49f2-a30f-9fc52b91714e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.120032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data" (OuterVolumeSpecName: "config-data") pod "12316d0f-f90c-49f2-a30f-9fc52b91714e" (UID: "12316d0f-f90c-49f2-a30f-9fc52b91714e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.172641 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msxj8\" (UniqueName: \"kubernetes.io/projected/12316d0f-f90c-49f2-a30f-9fc52b91714e-kube-api-access-msxj8\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.172672 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.172683 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12316d0f-f90c-49f2-a30f-9fc52b91714e-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.172692 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.172700 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12316d0f-f90c-49f2-a30f-9fc52b91714e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.958597 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6df6665768-zr7sg" Dec 05 08:14:53 crc kubenswrapper[4780]: I1205 08:14:53.994917 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"] Dec 05 08:14:54 crc kubenswrapper[4780]: I1205 08:14:54.001809 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6df6665768-zr7sg"] Dec 05 08:14:54 crc kubenswrapper[4780]: I1205 08:14:54.171289 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" path="/var/lib/kubelet/pods/12316d0f-f90c-49f2-a30f-9fc52b91714e/volumes" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.137082 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd"] Dec 05 08:15:00 crc kubenswrapper[4780]: E1205 08:15:00.139808 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.139829 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api" Dec 05 08:15:00 crc kubenswrapper[4780]: E1205 08:15:00.139852 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api-log" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.139859 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api-log" Dec 05 08:15:00 crc kubenswrapper[4780]: E1205 08:15:00.139872 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="dnsmasq-dns" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.139899 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="dnsmasq-dns" Dec 05 08:15:00 crc kubenswrapper[4780]: E1205 08:15:00.139917 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="init" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.139924 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="init" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.140128 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.140147 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca16f1c-cae0-481e-b008-973becf7fc55" containerName="dnsmasq-dns" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.140170 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="12316d0f-f90c-49f2-a30f-9fc52b91714e" containerName="barbican-api-log" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.140963 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.143859 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.144518 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.156232 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd"] Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.190426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.191091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9mrx\" (UniqueName: \"kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.191204 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.293566 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9mrx\" (UniqueName: \"kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.293622 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.293658 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.294861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume\") pod 
\"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.300626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.318589 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9mrx\" (UniqueName: \"kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx\") pod \"collect-profiles-29415375-2vbrd\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.472205 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:00 crc kubenswrapper[4780]: I1205 08:15:00.914676 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd"] Dec 05 08:15:01 crc kubenswrapper[4780]: I1205 08:15:01.013385 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" event={"ID":"13ae8a90-29c1-4480-ad3f-732a13612443","Type":"ContainerStarted","Data":"adef587ee1062b6887c821dc5329c5d881882e3f3c4c259f8ab5beda67669f82"} Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.022550 4780 generic.go:334] "Generic (PLEG): container finished" podID="13ae8a90-29c1-4480-ad3f-732a13612443" containerID="bbded386c637973b1313c0218620a8e2d137c57d7573ad1a375e78f0d02ac279" exitCode=0 Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.022748 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" event={"ID":"13ae8a90-29c1-4480-ad3f-732a13612443","Type":"ContainerDied","Data":"bbded386c637973b1313c0218620a8e2d137c57d7573ad1a375e78f0d02ac279"} Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.565215 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mtwcr"] Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.566656 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.572779 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mtwcr"] Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.635192 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfv5z\" (UniqueName: \"kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.635547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.678333 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b3b-account-create-update-sphfv"] Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.679611 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.682198 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.695459 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b3b-account-create-update-sphfv"] Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.736757 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfv5z\" (UniqueName: \"kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.736822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfjk\" (UniqueName: \"kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.736901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.736966 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.737643 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.755349 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfv5z\" (UniqueName: \"kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z\") pod \"neutron-db-create-mtwcr\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.838276 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.838401 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfjk\" (UniqueName: \"kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.839355 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.853939 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfjk\" (UniqueName: \"kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk\") pod \"neutron-7b3b-account-create-update-sphfv\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:02 crc kubenswrapper[4780]: I1205 08:15:02.892627 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.005315 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.339084 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.358995 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mtwcr"] Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.448242 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume\") pod \"13ae8a90-29c1-4480-ad3f-732a13612443\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.448320 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9mrx\" (UniqueName: \"kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx\") pod \"13ae8a90-29c1-4480-ad3f-732a13612443\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.448370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume\") pod \"13ae8a90-29c1-4480-ad3f-732a13612443\" (UID: \"13ae8a90-29c1-4480-ad3f-732a13612443\") " Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.449863 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume" (OuterVolumeSpecName: "config-volume") pod "13ae8a90-29c1-4480-ad3f-732a13612443" (UID: "13ae8a90-29c1-4480-ad3f-732a13612443"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.454125 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx" (OuterVolumeSpecName: "kube-api-access-l9mrx") pod "13ae8a90-29c1-4480-ad3f-732a13612443" (UID: "13ae8a90-29c1-4480-ad3f-732a13612443"). InnerVolumeSpecName "kube-api-access-l9mrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.454261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "13ae8a90-29c1-4480-ad3f-732a13612443" (UID: "13ae8a90-29c1-4480-ad3f-732a13612443"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.534899 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b3b-account-create-update-sphfv"] Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.552272 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/13ae8a90-29c1-4480-ad3f-732a13612443-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.552509 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9mrx\" (UniqueName: \"kubernetes.io/projected/13ae8a90-29c1-4480-ad3f-732a13612443-kube-api-access-l9mrx\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:03 crc kubenswrapper[4780]: I1205 08:15:03.552522 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ae8a90-29c1-4480-ad3f-732a13612443-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:03 crc kubenswrapper[4780]: E1205 08:15:03.758043 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7eff8c1_ed87_418e_86b8_86f37004a4ef.slice/crio-9efe5c5171261230c6db5a9ffa7cee26d391fbd6ec29600a264fbe9ffe9a0dcd.scope\": RecentStats: unable to find data in memory cache]" Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.044261 4780 generic.go:334] "Generic (PLEG): container finished" podID="c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" containerID="7fd05480d6762d8769840effa3a0fa102bb6e34ee4f083c54e3d5b46577d207d" exitCode=0 Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.044317 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b3b-account-create-update-sphfv" event={"ID":"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c","Type":"ContainerDied","Data":"7fd05480d6762d8769840effa3a0fa102bb6e34ee4f083c54e3d5b46577d207d"} Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.044340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b3b-account-create-update-sphfv" event={"ID":"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c","Type":"ContainerStarted","Data":"7d66fb49ecd5937436cea074d90c69adf96e9b5e1c397490854e6aa367d3d2f3"} Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.046430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" event={"ID":"13ae8a90-29c1-4480-ad3f-732a13612443","Type":"ContainerDied","Data":"adef587ee1062b6887c821dc5329c5d881882e3f3c4c259f8ab5beda67669f82"} Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.046447 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd" Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.046497 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adef587ee1062b6887c821dc5329c5d881882e3f3c4c259f8ab5beda67669f82" Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.047963 4780 generic.go:334] "Generic (PLEG): container finished" podID="d7eff8c1-ed87-418e-86b8-86f37004a4ef" containerID="9efe5c5171261230c6db5a9ffa7cee26d391fbd6ec29600a264fbe9ffe9a0dcd" exitCode=0 Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.047999 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtwcr" event={"ID":"d7eff8c1-ed87-418e-86b8-86f37004a4ef","Type":"ContainerDied","Data":"9efe5c5171261230c6db5a9ffa7cee26d391fbd6ec29600a264fbe9ffe9a0dcd"} Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.048014 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtwcr" event={"ID":"d7eff8c1-ed87-418e-86b8-86f37004a4ef","Type":"ContainerStarted","Data":"e9745c14bd5f9d712084256e3f6fe964f13c6d49cb72f388c922b07e2e963fc0"} Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.433036 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"] Dec 05 08:15:04 crc kubenswrapper[4780]: I1205 08:15:04.441287 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415330-4rgw7"] Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.465237 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.473985 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.585489 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts\") pod \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.585677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvfjk\" (UniqueName: \"kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk\") pod \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.585713 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts\") pod \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\" (UID: \"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c\") " Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.585732 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfv5z\" (UniqueName: \"kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z\") pod \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\" (UID: \"d7eff8c1-ed87-418e-86b8-86f37004a4ef\") " Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.586076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7eff8c1-ed87-418e-86b8-86f37004a4ef" (UID: "d7eff8c1-ed87-418e-86b8-86f37004a4ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.586329 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" (UID: "c8b8b038-0acd-4cf5-bc43-4f36c4577b7c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.591864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk" (OuterVolumeSpecName: "kube-api-access-hvfjk") pod "c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" (UID: "c8b8b038-0acd-4cf5-bc43-4f36c4577b7c"). InnerVolumeSpecName "kube-api-access-hvfjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.591925 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z" (OuterVolumeSpecName: "kube-api-access-tfv5z") pod "d7eff8c1-ed87-418e-86b8-86f37004a4ef" (UID: "d7eff8c1-ed87-418e-86b8-86f37004a4ef"). InnerVolumeSpecName "kube-api-access-tfv5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.687442 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7eff8c1-ed87-418e-86b8-86f37004a4ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.687480 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvfjk\" (UniqueName: \"kubernetes.io/projected/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-kube-api-access-hvfjk\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.687490 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:05 crc kubenswrapper[4780]: I1205 08:15:05.687499 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfv5z\" (UniqueName: \"kubernetes.io/projected/d7eff8c1-ed87-418e-86b8-86f37004a4ef-kube-api-access-tfv5z\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.065766 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mtwcr" event={"ID":"d7eff8c1-ed87-418e-86b8-86f37004a4ef","Type":"ContainerDied","Data":"e9745c14bd5f9d712084256e3f6fe964f13c6d49cb72f388c922b07e2e963fc0"} Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.065811 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9745c14bd5f9d712084256e3f6fe964f13c6d49cb72f388c922b07e2e963fc0" Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.065836 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mtwcr" Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.067288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b3b-account-create-update-sphfv" event={"ID":"c8b8b038-0acd-4cf5-bc43-4f36c4577b7c","Type":"ContainerDied","Data":"7d66fb49ecd5937436cea074d90c69adf96e9b5e1c397490854e6aa367d3d2f3"} Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.067329 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d66fb49ecd5937436cea074d90c69adf96e9b5e1c397490854e6aa367d3d2f3" Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.067380 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b3b-account-create-update-sphfv" Dec 05 08:15:06 crc kubenswrapper[4780]: I1205 08:15:06.148343 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d66a54-fe88-4e26-a374-70df7e86c9ea" path="/var/lib/kubelet/pods/a6d66a54-fe88-4e26-a374-70df7e86c9ea/volumes" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.139728 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:15:07 crc kubenswrapper[4780]: E1205 08:15:07.140006 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.884613 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lmzsz"] Dec 05 08:15:07 crc kubenswrapper[4780]: E1205 08:15:07.885008 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ae8a90-29c1-4480-ad3f-732a13612443" containerName="collect-profiles" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885026 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ae8a90-29c1-4480-ad3f-732a13612443" containerName="collect-profiles" Dec 05 08:15:07 crc kubenswrapper[4780]: E1205 08:15:07.885050 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" containerName="mariadb-account-create-update" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885057 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" containerName="mariadb-account-create-update" Dec 05 08:15:07 crc kubenswrapper[4780]: E1205 08:15:07.885076 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7eff8c1-ed87-418e-86b8-86f37004a4ef" containerName="mariadb-database-create" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885085 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7eff8c1-ed87-418e-86b8-86f37004a4ef" containerName="mariadb-database-create" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885278 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ae8a90-29c1-4480-ad3f-732a13612443" containerName="collect-profiles" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885303 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7eff8c1-ed87-418e-86b8-86f37004a4ef" containerName="mariadb-database-create" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.885314 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" containerName="mariadb-account-create-update" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.886003 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.888713 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.888868 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-777j2" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.889126 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.901638 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lmzsz"] Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.929708 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.929770 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8swd\" (UniqueName: \"kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:07 crc kubenswrapper[4780]: I1205 08:15:07.929797 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.030946 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.031048 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8swd\" (UniqueName: \"kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.031074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.036297 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.043771 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.047511 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8swd\" (UniqueName: \"kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd\") pod \"neutron-db-sync-lmzsz\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.202613 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:08 crc kubenswrapper[4780]: I1205 08:15:08.448754 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lmzsz"] Dec 05 08:15:08 crc kubenswrapper[4780]: W1205 08:15:08.453736 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f776a1f_03be_4240_8169_09cc7aaf98f7.slice/crio-5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46 WatchSource:0}: Error finding container 5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46: Status 404 returned error can't find the container with id 5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46 Dec 05 08:15:09 crc kubenswrapper[4780]: I1205 08:15:09.089356 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmzsz" event={"ID":"5f776a1f-03be-4240-8169-09cc7aaf98f7","Type":"ContainerStarted","Data":"ff58aeaac57a573ee7aef08ed754f87e2359dc26f44625e37e8b4c252c8002c8"} Dec 05 08:15:09 crc kubenswrapper[4780]: I1205 08:15:09.089397 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmzsz" event={"ID":"5f776a1f-03be-4240-8169-09cc7aaf98f7","Type":"ContainerStarted","Data":"5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46"} Dec 05 08:15:09 crc kubenswrapper[4780]: I1205 08:15:09.111687 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lmzsz" podStartSLOduration=2.111662473 podStartE2EDuration="2.111662473s" podCreationTimestamp="2025-12-05 08:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:15:09.101458195 +0000 UTC m=+5343.170974527" watchObservedRunningTime="2025-12-05 08:15:09.111662473 +0000 UTC m=+5343.181178805" Dec 05 08:15:13 crc kubenswrapper[4780]: I1205 08:15:13.131160 4780 generic.go:334] "Generic (PLEG): container finished" podID="5f776a1f-03be-4240-8169-09cc7aaf98f7" containerID="ff58aeaac57a573ee7aef08ed754f87e2359dc26f44625e37e8b4c252c8002c8" exitCode=0 Dec 05 08:15:13 crc kubenswrapper[4780]: I1205 08:15:13.131289 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmzsz" event={"ID":"5f776a1f-03be-4240-8169-09cc7aaf98f7","Type":"ContainerDied","Data":"ff58aeaac57a573ee7aef08ed754f87e2359dc26f44625e37e8b4c252c8002c8"} Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.525992 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.680424 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8swd\" (UniqueName: \"kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd\") pod \"5f776a1f-03be-4240-8169-09cc7aaf98f7\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.680513 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config\") pod \"5f776a1f-03be-4240-8169-09cc7aaf98f7\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.680543 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle\") pod \"5f776a1f-03be-4240-8169-09cc7aaf98f7\" (UID: \"5f776a1f-03be-4240-8169-09cc7aaf98f7\") " Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.693168 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd" (OuterVolumeSpecName: "kube-api-access-b8swd") pod "5f776a1f-03be-4240-8169-09cc7aaf98f7" (UID: "5f776a1f-03be-4240-8169-09cc7aaf98f7"). InnerVolumeSpecName "kube-api-access-b8swd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.720928 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config" (OuterVolumeSpecName: "config") pod "5f776a1f-03be-4240-8169-09cc7aaf98f7" (UID: "5f776a1f-03be-4240-8169-09cc7aaf98f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.734375 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f776a1f-03be-4240-8169-09cc7aaf98f7" (UID: "5f776a1f-03be-4240-8169-09cc7aaf98f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.786131 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8swd\" (UniqueName: \"kubernetes.io/projected/5f776a1f-03be-4240-8169-09cc7aaf98f7-kube-api-access-b8swd\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.786175 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:14 crc kubenswrapper[4780]: I1205 08:15:14.786189 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f776a1f-03be-4240-8169-09cc7aaf98f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.151140 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmzsz" event={"ID":"5f776a1f-03be-4240-8169-09cc7aaf98f7","Type":"ContainerDied","Data":"5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46"} Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.151204 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c59269be74a65989be557151b7403af15c1af4c0f0ad42322ec356c01062f46" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.151215 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lmzsz" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.315279 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:15:15 crc kubenswrapper[4780]: E1205 08:15:15.315652 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f776a1f-03be-4240-8169-09cc7aaf98f7" containerName="neutron-db-sync" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.315671 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f776a1f-03be-4240-8169-09cc7aaf98f7" containerName="neutron-db-sync" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.315920 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f776a1f-03be-4240-8169-09cc7aaf98f7" containerName="neutron-db-sync" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.316916 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.340633 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.471435 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.473194 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.476510 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.476729 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.476730 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.476821 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-777j2" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.492211 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.501692 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.501747 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.501820 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdf6\" (UniqueName: \"kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.503795 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.503864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605634 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605679 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdf6\" (UniqueName: \"kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605705 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605729 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7hd\" (UniqueName: \"kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.605774 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.606436 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.606532 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.606598 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.606671 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.607019 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.607179 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.607416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.607566 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.630368 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdf6\" (UniqueName: \"kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6\") pod \"dnsmasq-dns-55fccdb759-pdxg8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.654089 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.708547 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.708987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7hd\" (UniqueName: \"kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.709054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.709083 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.709112 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.725145 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.725723 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.727495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.732832 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.767630 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qb7hd\" (UniqueName: \"kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd\") pod \"neutron-57f6d746c6-t5ttl\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:15 crc kubenswrapper[4780]: I1205 08:15:15.793080 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:16 crc kubenswrapper[4780]: I1205 08:15:16.128010 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:15:16 crc kubenswrapper[4780]: I1205 08:15:16.171036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" event={"ID":"f03c71d0-0aaf-4433-b375-2baefd7abdb8","Type":"ContainerStarted","Data":"c016e25c7574dd95207417d5107ead0a6100365d4706f78e115ea12d8df055d5"} Dec 05 08:15:16 crc kubenswrapper[4780]: I1205 08:15:16.377648 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.203654 4780 generic.go:334] "Generic (PLEG): container finished" podID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerID="fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e" exitCode=0 Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.203708 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" event={"ID":"f03c71d0-0aaf-4433-b375-2baefd7abdb8","Type":"ContainerDied","Data":"fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e"} Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.210063 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerStarted","Data":"6cc68c21af799afa8290da5e5f229cc099951f4191cb5d6187199ce97d3780a2"} Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.210129 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerStarted","Data":"d03795b34e323e682341abba024fe2d3c90f2f36cb2799ab0a81cdb1a8faaa97"} Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.210141 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerStarted","Data":"add5c81b3699c592939e1303aac3497c92117addbaad26f875d223a08c04aeb1"} Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.210413 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:17 crc kubenswrapper[4780]: I1205 08:15:17.267602 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-57f6d746c6-t5ttl" podStartSLOduration=2.267540744 podStartE2EDuration="2.267540744s" podCreationTimestamp="2025-12-05 08:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:15:17.243334935 +0000 UTC m=+5351.312851267" watchObservedRunningTime="2025-12-05 08:15:17.267540744 +0000 UTC m=+5351.337057086" Dec 05 08:15:18 crc kubenswrapper[4780]: I1205 08:15:18.221094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" 
event={"ID":"f03c71d0-0aaf-4433-b375-2baefd7abdb8","Type":"ContainerStarted","Data":"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c"} Dec 05 08:15:18 crc kubenswrapper[4780]: I1205 08:15:18.221394 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:18 crc kubenswrapper[4780]: I1205 08:15:18.246837 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" podStartSLOduration=3.246815568 podStartE2EDuration="3.246815568s" podCreationTimestamp="2025-12-05 08:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:15:18.239170261 +0000 UTC m=+5352.308686603" watchObservedRunningTime="2025-12-05 08:15:18.246815568 +0000 UTC m=+5352.316331910" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.139271 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:15:19 crc kubenswrapper[4780]: E1205 08:15:19.139582 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.228057 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-745d445d4c-g6fcv"] Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.231373 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.235932 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.245430 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.246917 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-745d445d4c-g6fcv"] Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.277892 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-combined-ca-bundle\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.277954 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.278033 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-internal-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.278050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-public-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.278132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvm9b\" (UniqueName: \"kubernetes.io/projected/940ad4c5-eea8-4c74-af2b-475201d54bc4-kube-api-access-fvm9b\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.278157 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-httpd-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.278183 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-ovndb-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-internal-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379433 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-public-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379508 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvm9b\" (UniqueName: \"kubernetes.io/projected/940ad4c5-eea8-4c74-af2b-475201d54bc4-kube-api-access-fvm9b\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379535 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-httpd-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379570 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-ovndb-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379616 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-combined-ca-bundle\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.379659 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.389627 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-ovndb-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.389640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-httpd-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.392647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-config\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " 
pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.393242 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-internal-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.400035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-combined-ca-bundle\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.399276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvm9b\" (UniqueName: \"kubernetes.io/projected/940ad4c5-eea8-4c74-af2b-475201d54bc4-kube-api-access-fvm9b\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.407824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/940ad4c5-eea8-4c74-af2b-475201d54bc4-public-tls-certs\") pod \"neutron-745d445d4c-g6fcv\" (UID: \"940ad4c5-eea8-4c74-af2b-475201d54bc4\") " pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:19 crc kubenswrapper[4780]: I1205 08:15:19.559048 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:20 crc kubenswrapper[4780]: W1205 08:15:20.197691 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940ad4c5_eea8_4c74_af2b_475201d54bc4.slice/crio-3312d6e46a4dd6191b68ac25a238b79178793ff30e1e637ad389af81e9bf6b06 WatchSource:0}: Error finding container 3312d6e46a4dd6191b68ac25a238b79178793ff30e1e637ad389af81e9bf6b06: Status 404 returned error can't find the container with id 3312d6e46a4dd6191b68ac25a238b79178793ff30e1e637ad389af81e9bf6b06 Dec 05 08:15:20 crc kubenswrapper[4780]: I1205 08:15:20.211797 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-745d445d4c-g6fcv"] Dec 05 08:15:20 crc kubenswrapper[4780]: I1205 08:15:20.251510 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d445d4c-g6fcv" event={"ID":"940ad4c5-eea8-4c74-af2b-475201d54bc4","Type":"ContainerStarted","Data":"3312d6e46a4dd6191b68ac25a238b79178793ff30e1e637ad389af81e9bf6b06"} Dec 05 08:15:21 crc kubenswrapper[4780]: I1205 08:15:21.263312 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d445d4c-g6fcv" event={"ID":"940ad4c5-eea8-4c74-af2b-475201d54bc4","Type":"ContainerStarted","Data":"fe20a5ee95f12e83256fd42d5f51c4a417a0880eb09370c978a7e6fe2fe35e20"} Dec 05 08:15:21 crc kubenswrapper[4780]: I1205 08:15:21.265865 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d445d4c-g6fcv" event={"ID":"940ad4c5-eea8-4c74-af2b-475201d54bc4","Type":"ContainerStarted","Data":"c5156a656712c0af6152b7a51d8b492e216f849931300f449c91775b10a539db"} Dec 05 08:15:21 crc kubenswrapper[4780]: I1205 08:15:21.266161 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 
08:15:21 crc kubenswrapper[4780]: I1205 08:15:21.306358 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-745d445d4c-g6fcv" podStartSLOduration=2.306297964 podStartE2EDuration="2.306297964s" podCreationTimestamp="2025-12-05 08:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:15:21.297096413 +0000 UTC m=+5355.366612755" watchObservedRunningTime="2025-12-05 08:15:21.306297964 +0000 UTC m=+5355.375814316" Dec 05 08:15:25 crc kubenswrapper[4780]: I1205 08:15:25.423947 4780 scope.go:117] "RemoveContainer" containerID="4c7eaf1873617357f9731db8f19194c7241afa46d79a0f4d42926494706629d4" Dec 05 08:15:25 crc kubenswrapper[4780]: I1205 08:15:25.656166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:15:25 crc kubenswrapper[4780]: I1205 08:15:25.714756 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"] Dec 05 08:15:25 crc kubenswrapper[4780]: I1205 08:15:25.715002 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="dnsmasq-dns" containerID="cri-o://4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787" gracePeriod=10 Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.207625 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.308452 4780 generic.go:334] "Generic (PLEG): container finished" podID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerID="4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787" exitCode=0 Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.308524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" event={"ID":"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022","Type":"ContainerDied","Data":"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787"} Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.308555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" event={"ID":"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022","Type":"ContainerDied","Data":"f71a264aabe2256422b2910e1df231ab317ffb0f522f8da2f64d5062f050e3fe"} Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.308579 4780 scope.go:117] "RemoveContainer" containerID="4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.308756 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bb4bd68fc-jmr8q" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.320423 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config\") pod \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.320472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc\") pod \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.320526 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb\") pod \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.320590 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stcxf\" (UniqueName: \"kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf\") pod \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.320614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb\") pod \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\" (UID: \"d4d4acd4-bbc5-44b5-a75d-8d1845e9b022\") " Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.333558 4780 scope.go:117] "RemoveContainer" containerID="f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.341150 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf" (OuterVolumeSpecName: "kube-api-access-stcxf") pod "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" (UID: "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022"). InnerVolumeSpecName "kube-api-access-stcxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.377379 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config" (OuterVolumeSpecName: "config") pod "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" (UID: "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.380948 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" (UID: "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.385049 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" (UID: "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.404588 4780 scope.go:117] "RemoveContainer" containerID="4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787" Dec 05 08:15:26 crc kubenswrapper[4780]: E1205 08:15:26.405295 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787\": container with ID starting with 4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787 not found: ID does not exist" containerID="4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.405343 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787"} err="failed to get container status \"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787\": rpc error: code = NotFound desc = could not find container \"4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787\": container with ID starting with 4b26298d5762acec34f947eaeb39450191a511e94dd0171651306e163f8f9787 not found: ID does not exist" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.405376 4780 scope.go:117] "RemoveContainer" containerID="f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0" Dec 05 08:15:26 crc kubenswrapper[4780]: E1205 08:15:26.405934 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0\": container with ID starting with f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0 not found: ID does not exist" containerID="f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.405996 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0"} err="failed to get container status \"f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0\": rpc error: code = NotFound desc = could not find container \"f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0\": container with ID starting with f58347eb75d7219438cfff6a0f57542082ebbd053f5fa1d4048858486d2a16e0 not found: ID does not exist" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.406827 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" (UID: "d4d4acd4-bbc5-44b5-a75d-8d1845e9b022"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.424419 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.424463 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.424476 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.424498 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stcxf\" (UniqueName: \"kubernetes.io/projected/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-kube-api-access-stcxf\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.424515 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.644608 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"] Dec 05 08:15:26 crc kubenswrapper[4780]: I1205 08:15:26.650081 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bb4bd68fc-jmr8q"] Dec 05 08:15:28 crc kubenswrapper[4780]: I1205 08:15:28.148413 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" path="/var/lib/kubelet/pods/d4d4acd4-bbc5-44b5-a75d-8d1845e9b022/volumes" Dec 05 08:15:32 crc kubenswrapper[4780]: I1205 08:15:32.138813 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:15:32 crc kubenswrapper[4780]: E1205 08:15:32.139402 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:15:43 crc kubenswrapper[4780]: I1205 08:15:43.139601 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:15:43 crc kubenswrapper[4780]: E1205 08:15:43.140752 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:15:45 crc kubenswrapper[4780]: I1205 08:15:45.802710 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:49 crc kubenswrapper[4780]: I1205 08:15:49.572406 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-745d445d4c-g6fcv" Dec 05 08:15:49 crc kubenswrapper[4780]: I1205 08:15:49.624448 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:49 crc kubenswrapper[4780]: I1205 08:15:49.625841 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57f6d746c6-t5ttl" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-api" containerID="cri-o://d03795b34e323e682341abba024fe2d3c90f2f36cb2799ab0a81cdb1a8faaa97" gracePeriod=30 Dec 05 08:15:49 crc kubenswrapper[4780]: I1205 08:15:49.625973 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-57f6d746c6-t5ttl" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-httpd" containerID="cri-o://6cc68c21af799afa8290da5e5f229cc099951f4191cb5d6187199ce97d3780a2" gracePeriod=30 Dec 05 08:15:50 crc kubenswrapper[4780]: I1205 08:15:50.518429 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerID="6cc68c21af799afa8290da5e5f229cc099951f4191cb5d6187199ce97d3780a2" exitCode=0 Dec 05 08:15:50 crc kubenswrapper[4780]: I1205 08:15:50.518508 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerDied","Data":"6cc68c21af799afa8290da5e5f229cc099951f4191cb5d6187199ce97d3780a2"} Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.528012 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerID="d03795b34e323e682341abba024fe2d3c90f2f36cb2799ab0a81cdb1a8faaa97" exitCode=0 Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.528089 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerDied","Data":"d03795b34e323e682341abba024fe2d3c90f2f36cb2799ab0a81cdb1a8faaa97"} Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.528121 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57f6d746c6-t5ttl" event={"ID":"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3","Type":"ContainerDied","Data":"add5c81b3699c592939e1303aac3497c92117addbaad26f875d223a08c04aeb1"} Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.528135 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="add5c81b3699c592939e1303aac3497c92117addbaad26f875d223a08c04aeb1" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.557123 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.613542 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle\") pod \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.613687 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs\") pod \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.613750 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb7hd\" (UniqueName: \"kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd\") pod \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.613829 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config\") pod \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.613914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config\") pod \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\" (UID: \"e5ece71d-d9cc-4d11-81bd-eaa28b5adff3\") " Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.624646 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" (UID: "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.624683 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd" (OuterVolumeSpecName: "kube-api-access-qb7hd") pod "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" (UID: "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3"). InnerVolumeSpecName "kube-api-access-qb7hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.659465 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" (UID: "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.661056 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config" (OuterVolumeSpecName: "config") pod "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" (UID: "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.674754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" (UID: "e5ece71d-d9cc-4d11-81bd-eaa28b5adff3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.715736 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.716319 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.716333 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.716346 4780 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:51 crc kubenswrapper[4780]: I1205 08:15:51.716359 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb7hd\" (UniqueName: \"kubernetes.io/projected/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3-kube-api-access-qb7hd\") on node \"crc\" DevicePath \"\"" Dec 05 08:15:52 crc kubenswrapper[4780]: I1205 08:15:52.535944 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-57f6d746c6-t5ttl" Dec 05 08:15:52 crc kubenswrapper[4780]: I1205 08:15:52.557928 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:52 crc kubenswrapper[4780]: I1205 08:15:52.566228 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-57f6d746c6-t5ttl"] Dec 05 08:15:54 crc kubenswrapper[4780]: I1205 08:15:54.149293 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" path="/var/lib/kubelet/pods/e5ece71d-d9cc-4d11-81bd-eaa28b5adff3/volumes" Dec 05 08:15:57 crc kubenswrapper[4780]: I1205 08:15:57.139132 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:15:57 crc kubenswrapper[4780]: E1205 08:15:57.139644 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.096438 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:00 crc kubenswrapper[4780]: E1205 08:16:00.098139 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="dnsmasq-dns" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098183 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="dnsmasq-dns" Dec 05 08:16:00 crc kubenswrapper[4780]: E1205 08:16:00.098219 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-api" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098231 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-api" Dec 05 08:16:00 crc kubenswrapper[4780]: E1205 08:16:00.098255 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-httpd" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098266 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-httpd" Dec 05 08:16:00 crc kubenswrapper[4780]: E1205 08:16:00.098320 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="init" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098335 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="init" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098745 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-api" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098798 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d4acd4-bbc5-44b5-a75d-8d1845e9b022" containerName="dnsmasq-dns" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.098811 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e5ece71d-d9cc-4d11-81bd-eaa28b5adff3" containerName="neutron-httpd" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.100559 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.116851 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.159635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vzk8\" (UniqueName: \"kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.159700 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.159727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.262226 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vzk8\" (UniqueName: \"kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.262609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.262760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.263001 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.263706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content\") pod \"community-operators-962gs\" (UID: 
\"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.281382 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vzk8\" (UniqueName: \"kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8\") pod \"community-operators-962gs\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.428381 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:00 crc kubenswrapper[4780]: I1205 08:16:00.916168 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:01 crc kubenswrapper[4780]: I1205 08:16:01.606096 4780 generic.go:334] "Generic (PLEG): container finished" podID="72518957-da95-450c-96ed-98332545bbde" containerID="9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3" exitCode=0 Dec 05 08:16:01 crc kubenswrapper[4780]: I1205 08:16:01.606133 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerDied","Data":"9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3"} Dec 05 08:16:01 crc kubenswrapper[4780]: I1205 08:16:01.606413 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerStarted","Data":"ef80ff0f475fcb4dfc1092a08f42151514f64be74ae4f51e0e343db11d57601f"} Dec 05 08:16:01 crc kubenswrapper[4780]: I1205 08:16:01.607978 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:16:02 crc kubenswrapper[4780]: I1205 08:16:02.614505 4780 generic.go:334] "Generic (PLEG): container finished" podID="72518957-da95-450c-96ed-98332545bbde" containerID="50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1" exitCode=0 Dec 05 08:16:02 crc kubenswrapper[4780]: I1205 08:16:02.614592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerDied","Data":"50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1"} Dec 05 08:16:03 crc kubenswrapper[4780]: I1205 08:16:03.637791 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerStarted","Data":"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5"} Dec 05 08:16:03 crc kubenswrapper[4780]: I1205 08:16:03.661493 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-962gs" podStartSLOduration=2.221826502 podStartE2EDuration="3.661472189s" podCreationTimestamp="2025-12-05 08:16:00 +0000 UTC" firstStartedPulling="2025-12-05 08:16:01.60774981 +0000 UTC m=+5395.677266132" lastFinishedPulling="2025-12-05 08:16:03.047395487 +0000 UTC m=+5397.116911819" observedRunningTime="2025-12-05 08:16:03.654968792 +0000 UTC m=+5397.724485114" watchObservedRunningTime="2025-12-05 08:16:03.661472189 +0000 UTC m=+5397.730988521" Dec 05 08:16:10 crc kubenswrapper[4780]: I1205 08:16:10.429474 4780 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:10 crc kubenswrapper[4780]: I1205 08:16:10.430165 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:10 crc kubenswrapper[4780]: I1205 08:16:10.466926 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:10 crc kubenswrapper[4780]: I1205 08:16:10.775652 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:10 crc kubenswrapper[4780]: I1205 08:16:10.819447 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:12 crc kubenswrapper[4780]: I1205 08:16:12.140022 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:16:12 crc kubenswrapper[4780]: E1205 08:16:12.140839 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:16:12 crc kubenswrapper[4780]: I1205 08:16:12.715244 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-962gs" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="registry-server" containerID="cri-o://55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5" gracePeriod=2 Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.137540 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.230556 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vzk8\" (UniqueName: \"kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8\") pod \"72518957-da95-450c-96ed-98332545bbde\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.237064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8" (OuterVolumeSpecName: "kube-api-access-5vzk8") pod "72518957-da95-450c-96ed-98332545bbde" (UID: "72518957-da95-450c-96ed-98332545bbde"). InnerVolumeSpecName "kube-api-access-5vzk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.333424 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content\") pod \"72518957-da95-450c-96ed-98332545bbde\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.333578 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities\") pod \"72518957-da95-450c-96ed-98332545bbde\" (UID: \"72518957-da95-450c-96ed-98332545bbde\") " Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.334087 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vzk8\" (UniqueName: \"kubernetes.io/projected/72518957-da95-450c-96ed-98332545bbde-kube-api-access-5vzk8\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.334235 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities" (OuterVolumeSpecName: "utilities") pod "72518957-da95-450c-96ed-98332545bbde" (UID: "72518957-da95-450c-96ed-98332545bbde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.381285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72518957-da95-450c-96ed-98332545bbde" (UID: "72518957-da95-450c-96ed-98332545bbde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.435770 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.435804 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72518957-da95-450c-96ed-98332545bbde-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.729087 4780 generic.go:334] "Generic (PLEG): container finished" podID="72518957-da95-450c-96ed-98332545bbde" containerID="55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5" exitCode=0 Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.729139 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-962gs" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.729136 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerDied","Data":"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5"} Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.729261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-962gs" event={"ID":"72518957-da95-450c-96ed-98332545bbde","Type":"ContainerDied","Data":"ef80ff0f475fcb4dfc1092a08f42151514f64be74ae4f51e0e343db11d57601f"} Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.729285 4780 scope.go:117] "RemoveContainer" containerID="55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.750560 4780 scope.go:117] "RemoveContainer" containerID="50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.758567 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.789449 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-962gs"] Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.794368 4780 scope.go:117] "RemoveContainer" containerID="9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.809187 4780 scope.go:117] "RemoveContainer" containerID="55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5" Dec 05 08:16:13 crc kubenswrapper[4780]: E1205 08:16:13.809578 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5\": container with ID starting with 55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5 not found: ID does not exist" containerID="55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.809625 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5"} err="failed to get container status \"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5\": rpc error: code = NotFound desc = could not find container \"55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5\": container with ID starting with 55cf2d4a9b519c8de879327415416db643d19bbc35c7a6babdee5265f5411cb5 not found: ID does not exist" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.809651 4780 scope.go:117] "RemoveContainer" containerID="50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1" Dec 05 08:16:13 crc kubenswrapper[4780]: E1205 08:16:13.810052 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1\": container with ID starting with 50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1 not found: ID does not exist" containerID="50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.810082 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1"} err="failed to get container status \"50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1\": rpc error: code = NotFound desc = could not find container \"50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1\": container with ID starting with 50c1d2d38bc4061b30d14deef0fdc41f3a86918bcf4e8cbfc753857b6aba7df1 not found: ID does not exist" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.810104 4780 scope.go:117] "RemoveContainer" containerID="9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3" Dec 05 08:16:13 crc kubenswrapper[4780]: E1205 08:16:13.810326 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3\": container with ID starting with 9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3 not found: ID does not exist" containerID="9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3" Dec 05 08:16:13 crc kubenswrapper[4780]: I1205 08:16:13.810354 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3"} err="failed to get container status \"9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3\": rpc error: code = NotFound desc = could not find container \"9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3\": container with ID starting with 9b7ab450abd4f2081e899c971d5b66225d5b6ef1ad3300cacecda01d612649d3 not found: ID does not exist" Dec 05 08:16:14 crc kubenswrapper[4780]: I1205 08:16:14.154140 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72518957-da95-450c-96ed-98332545bbde" path="/var/lib/kubelet/pods/72518957-da95-450c-96ed-98332545bbde/volumes" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.744398 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-49kwx"] Dec 05 08:16:15 crc kubenswrapper[4780]: E1205 08:16:15.747549 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="extract-utilities" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.747577 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="extract-utilities" Dec 05 08:16:15 crc kubenswrapper[4780]: E1205 08:16:15.747599 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="registry-server" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.747606 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="registry-server" Dec 05 08:16:15 crc kubenswrapper[4780]: E1205 08:16:15.747621 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="extract-content" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.747629 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="72518957-da95-450c-96ed-98332545bbde" containerName="extract-content" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.747839 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="72518957-da95-450c-96ed-98332545bbde" 
containerName="registry-server" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.748460 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.750338 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.752243 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dfcgv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.752243 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.756485 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.756730 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.769745 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.769804 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wp8\" (UniqueName: \"kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.769829 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.769833 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-49kwx"] Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.769849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.770188 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.770247 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf\") pod \"swift-ring-rebalance-49kwx\" (UID: 
\"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.770349 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.816399 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.818135 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.859107 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872233 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872308 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872326 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872347 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872397 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqvb\" (UniqueName: \"kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872467 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872547 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872607 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wp8\" (UniqueName: \"kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.872641 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.873437 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.873671 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.884529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.885833 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf\") pod \"swift-ring-rebalance-49kwx\" (UID: 
\"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.887681 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.902616 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.914077 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wp8\" (UniqueName: \"kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8\") pod \"swift-ring-rebalance-49kwx\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.984871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.985102 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.985133 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.990488 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqvb\" (UniqueName: \"kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.990640 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.991260 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 
08:16:15 crc kubenswrapper[4780]: I1205 08:16:15.992120 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:15.994466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:15.995248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.029789 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqvb\" (UniqueName: \"kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb\") pod \"dnsmasq-dns-7b67b458c7-282fv\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.071742 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.143005 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.514203 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.604476 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-49kwx"] Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.757287 4780 generic.go:334] "Generic (PLEG): container finished" podID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerID="2381b03ee8fe1ead2704c8928e5177c3b472fc60557ceb7479b761bc2609d263" exitCode=0 Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.757463 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" event={"ID":"005d187e-0a30-4c2b-8480-c9dd9c53fcd7","Type":"ContainerDied","Data":"2381b03ee8fe1ead2704c8928e5177c3b472fc60557ceb7479b761bc2609d263"} Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.758899 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" event={"ID":"005d187e-0a30-4c2b-8480-c9dd9c53fcd7","Type":"ContainerStarted","Data":"e69353d16fcbaa62f2d57bc7660f35e81c8cc7f11949c3f45bc943a782e07273"} Dec 05 08:16:16 crc kubenswrapper[4780]: I1205 08:16:16.763581 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-49kwx" event={"ID":"42b47e2e-4b29-4148-a992-34d12379b270","Type":"ContainerStarted","Data":"5ce2f7d19018871b2ed8b5f270cb1e3ef4326b1370237b883c98c35ba0757f62"} Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.790068 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" event={"ID":"005d187e-0a30-4c2b-8480-c9dd9c53fcd7","Type":"ContainerStarted","Data":"c8155498f5450db309d81d7857e0bd505348d241b244901951a7e91190a8c862"} Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.791546 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.817523 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" podStartSLOduration=2.8174990600000003 podStartE2EDuration="2.81749906s" podCreationTimestamp="2025-12-05 08:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:16:17.811853626 +0000 UTC m=+5411.881369978" watchObservedRunningTime="2025-12-05 08:16:17.81749906 +0000 UTC m=+5411.887015392" Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.913251 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.915415 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.918652 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 05 08:16:17 crc kubenswrapper[4780]: I1205 08:16:17.929629 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042116 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042385 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042437 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.042827 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mc4q\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144549 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mc4q\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144634 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: 
\"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144680 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144752 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144777 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.144798 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.145171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.145470 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.150432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.151361 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.151790 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.163684 
4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mc4q\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q\") pod \"swift-proxy-6b7f7855cb-7dfwb\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:18 crc kubenswrapper[4780]: I1205 08:16:18.243139 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.011965 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.323437 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-56fff8bbb4-tvswm"] Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.325183 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.327578 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.327717 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.343371 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56fff8bbb4-tvswm"] Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.473937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-public-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474442 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-log-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474509 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-run-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474538 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2kf\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-kube-api-access-2n2kf\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474635 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-combined-ca-bundle\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " 
pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474665 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-etc-swift\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474690 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-internal-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.474738 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-config-data\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.577632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-combined-ca-bundle\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.579378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-etc-swift\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.579436 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-internal-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.579523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-config-data\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.579765 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-public-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.579900 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-log-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " 
pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.580015 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-run-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.580100 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2kf\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-kube-api-access-2n2kf\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.580687 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-run-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.580687 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca38aa6-7f9c-470a-a24d-80def95f09f7-log-httpd\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.587341 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-combined-ca-bundle\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.587375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-internal-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.588021 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-public-tls-certs\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.590561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-etc-swift\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.595439 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca38aa6-7f9c-470a-a24d-80def95f09f7-config-data\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.601567 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2kf\" (UniqueName: \"kubernetes.io/projected/aca38aa6-7f9c-470a-a24d-80def95f09f7-kube-api-access-2n2kf\") pod \"swift-proxy-56fff8bbb4-tvswm\" (UID: \"aca38aa6-7f9c-470a-a24d-80def95f09f7\") " pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:19 crc kubenswrapper[4780]: I1205 08:16:19.646857 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:20 crc kubenswrapper[4780]: W1205 08:16:20.829183 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba0be0b_fe8c_4c2b_9800_86933a2b71e7.slice/crio-a940bad831d9564d45cc96293d9c04f2437b068ab4111a1ed8dac46cd1ba2b6d WatchSource:0}: Error finding container a940bad831d9564d45cc96293d9c04f2437b068ab4111a1ed8dac46cd1ba2b6d: Status 404 returned error can't find the container with id a940bad831d9564d45cc96293d9c04f2437b068ab4111a1ed8dac46cd1ba2b6d Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.455475 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56fff8bbb4-tvswm"] Dec 05 08:16:21 crc kubenswrapper[4780]: W1205 08:16:21.470753 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca38aa6_7f9c_470a_a24d_80def95f09f7.slice/crio-1f5c775c2680579fedc0183675e02f39eb460103964dd0ce99961f8358bc14f5 WatchSource:0}: Error finding container 1f5c775c2680579fedc0183675e02f39eb460103964dd0ce99961f8358bc14f5: Status 404 returned error can't find the container with id 1f5c775c2680579fedc0183675e02f39eb460103964dd0ce99961f8358bc14f5 Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.831438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-49kwx" event={"ID":"42b47e2e-4b29-4148-a992-34d12379b270","Type":"ContainerStarted","Data":"37b17a169a1564a6549e5c07920261b1d3cabc33783dc46ccf8d5f3b2ec4de3f"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.834389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56fff8bbb4-tvswm" event={"ID":"aca38aa6-7f9c-470a-a24d-80def95f09f7","Type":"ContainerStarted","Data":"5b8c070d15f9c02704adf7bef544e7f098bff6a5ba3738e6aeaf21309867973e"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.834430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56fff8bbb4-tvswm" event={"ID":"aca38aa6-7f9c-470a-a24d-80def95f09f7","Type":"ContainerStarted","Data":"cd281d03a4e9e77a6de90fa8e6476cf98ae46c5466777564e2ccc4548120cb84"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.834444 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56fff8bbb4-tvswm" event={"ID":"aca38aa6-7f9c-470a-a24d-80def95f09f7","Type":"ContainerStarted","Data":"1f5c775c2680579fedc0183675e02f39eb460103964dd0ce99961f8358bc14f5"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.834604 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.836117 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerStarted","Data":"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 
08:16:21.836160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerStarted","Data":"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.836176 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerStarted","Data":"a940bad831d9564d45cc96293d9c04f2437b068ab4111a1ed8dac46cd1ba2b6d"} Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.836275 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.854224 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-49kwx" podStartSLOduration=2.590899838 podStartE2EDuration="6.854205135s" podCreationTimestamp="2025-12-05 08:16:15 +0000 UTC" firstStartedPulling="2025-12-05 08:16:16.61669047 +0000 UTC m=+5410.686206802" lastFinishedPulling="2025-12-05 08:16:20.879995767 +0000 UTC m=+5414.949512099" observedRunningTime="2025-12-05 08:16:21.847595075 +0000 UTC m=+5415.917111417" watchObservedRunningTime="2025-12-05 08:16:21.854205135 +0000 UTC m=+5415.923721467" Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.871771 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" podStartSLOduration=4.70961049 podStartE2EDuration="4.87170747s" podCreationTimestamp="2025-12-05 08:16:17 +0000 UTC" firstStartedPulling="2025-12-05 08:16:20.837009817 +0000 UTC m=+5414.906526159" lastFinishedPulling="2025-12-05 08:16:20.999106817 +0000 UTC m=+5415.068623139" observedRunningTime="2025-12-05 08:16:21.866677163 +0000 UTC m=+5415.936193495" watchObservedRunningTime="2025-12-05 08:16:21.87170747 +0000 UTC m=+5415.941223802" Dec 05 08:16:21 crc kubenswrapper[4780]: I1205 08:16:21.901994 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-56fff8bbb4-tvswm" podStartSLOduration=2.901979394 podStartE2EDuration="2.901979394s" podCreationTimestamp="2025-12-05 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:16:21.894268394 +0000 UTC m=+5415.963784726" watchObservedRunningTime="2025-12-05 08:16:21.901979394 +0000 UTC m=+5415.971495726" Dec 05 08:16:22 crc kubenswrapper[4780]: I1205 08:16:22.845924 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:22 crc kubenswrapper[4780]: I1205 08:16:22.846243 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:25 crc kubenswrapper[4780]: I1205 08:16:25.871005 4780 generic.go:334] "Generic (PLEG): container finished" podID="42b47e2e-4b29-4148-a992-34d12379b270" containerID="37b17a169a1564a6549e5c07920261b1d3cabc33783dc46ccf8d5f3b2ec4de3f" exitCode=0 Dec 05 08:16:25 crc kubenswrapper[4780]: I1205 08:16:25.871139 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-49kwx" event={"ID":"42b47e2e-4b29-4148-a992-34d12379b270","Type":"ContainerDied","Data":"37b17a169a1564a6549e5c07920261b1d3cabc33783dc46ccf8d5f3b2ec4de3f"} Dec 05 08:16:26 crc kubenswrapper[4780]: I1205 
08:16:26.150663 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:16:26 crc kubenswrapper[4780]: E1205 08:16:26.152022 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:16:26 crc kubenswrapper[4780]: I1205 08:16:26.152840 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:16:26 crc kubenswrapper[4780]: I1205 08:16:26.208486 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:16:26 crc kubenswrapper[4780]: I1205 08:16:26.880918 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="dnsmasq-dns" containerID="cri-o://bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c" gracePeriod=10 Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.310779 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.317997 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447196 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc\") pod \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447245 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447279 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mdf6\" (UniqueName: \"kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6\") pod \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447322 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447358 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447389 4780 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb\") pod \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447439 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447495 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb\") pod \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447610 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config\") pod \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\" (UID: \"f03c71d0-0aaf-4433-b375-2baefd7abdb8\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447677 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96wp8\" (UniqueName: \"kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8\") pod \"42b47e2e-4b29-4148-a992-34d12379b270\" (UID: \"42b47e2e-4b29-4148-a992-34d12379b270\") " Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.447670 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.448179 4780 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.448567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.462376 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6" (OuterVolumeSpecName: "kube-api-access-2mdf6") pod "f03c71d0-0aaf-4433-b375-2baefd7abdb8" (UID: "f03c71d0-0aaf-4433-b375-2baefd7abdb8"). InnerVolumeSpecName "kube-api-access-2mdf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.462467 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8" (OuterVolumeSpecName: "kube-api-access-96wp8") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "kube-api-access-96wp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.466470 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.474095 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.477315 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.477621 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts" (OuterVolumeSpecName: "scripts") pod "42b47e2e-4b29-4148-a992-34d12379b270" (UID: "42b47e2e-4b29-4148-a992-34d12379b270"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.502864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f03c71d0-0aaf-4433-b375-2baefd7abdb8" (UID: "f03c71d0-0aaf-4433-b375-2baefd7abdb8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.503983 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f03c71d0-0aaf-4433-b375-2baefd7abdb8" (UID: "f03c71d0-0aaf-4433-b375-2baefd7abdb8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.507308 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config" (OuterVolumeSpecName: "config") pod "f03c71d0-0aaf-4433-b375-2baefd7abdb8" (UID: "f03c71d0-0aaf-4433-b375-2baefd7abdb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.509241 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f03c71d0-0aaf-4433-b375-2baefd7abdb8" (UID: "f03c71d0-0aaf-4433-b375-2baefd7abdb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550005 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/42b47e2e-4b29-4148-a992-34d12379b270-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550051 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550064 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96wp8\" (UniqueName: \"kubernetes.io/projected/42b47e2e-4b29-4148-a992-34d12379b270-kube-api-access-96wp8\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550076 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550085 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mdf6\" (UniqueName: \"kubernetes.io/projected/f03c71d0-0aaf-4433-b375-2baefd7abdb8-kube-api-access-2mdf6\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550094 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550104 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b47e2e-4b29-4148-a992-34d12379b270-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550114 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550124 4780 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.550132 4780 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/42b47e2e-4b29-4148-a992-34d12379b270-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 
crc kubenswrapper[4780]: I1205 08:16:27.550141 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f03c71d0-0aaf-4433-b375-2baefd7abdb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.892073 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-49kwx" event={"ID":"42b47e2e-4b29-4148-a992-34d12379b270","Type":"ContainerDied","Data":"5ce2f7d19018871b2ed8b5f270cb1e3ef4326b1370237b883c98c35ba0757f62"} Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.892117 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce2f7d19018871b2ed8b5f270cb1e3ef4326b1370237b883c98c35ba0757f62" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.892250 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-49kwx" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.903189 4780 generic.go:334] "Generic (PLEG): container finished" podID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerID="bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c" exitCode=0 Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.903260 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" event={"ID":"f03c71d0-0aaf-4433-b375-2baefd7abdb8","Type":"ContainerDied","Data":"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c"} Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.903323 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" event={"ID":"f03c71d0-0aaf-4433-b375-2baefd7abdb8","Type":"ContainerDied","Data":"c016e25c7574dd95207417d5107ead0a6100365d4706f78e115ea12d8df055d5"} Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.903338 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fccdb759-pdxg8" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.903346 4780 scope.go:117] "RemoveContainer" containerID="bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.927099 4780 scope.go:117] "RemoveContainer" containerID="fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.950544 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.955303 4780 scope.go:117] "RemoveContainer" containerID="bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c" Dec 05 08:16:27 crc kubenswrapper[4780]: E1205 08:16:27.957269 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c\": container with ID starting with bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c not found: ID does not exist" containerID="bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.957359 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c"} err="failed to get container status \"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c\": rpc error: code = NotFound desc = could not find container \"bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c\": container with ID starting with bd7f146acb257428c33ff5e9436713f27985871b4ec753b283d466f09c58353c not found: ID does not exist" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.957402 4780 scope.go:117] "RemoveContainer" containerID="fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e" Dec 05 08:16:27 crc kubenswrapper[4780]: E1205 08:16:27.958041 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e\": container with ID starting with fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e not found: ID does not exist" containerID="fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.958227 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e"} err="failed to get container status \"fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e\": rpc error: code = NotFound desc = could not find container \"fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e\": container with ID starting with fbb3fc02064a0c9c692da7cb910ab95400bda4e1448a7c1468d93ae70a46215e not found: ID does not exist" Dec 05 08:16:27 crc kubenswrapper[4780]: I1205 08:16:27.960245 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fccdb759-pdxg8"] Dec 05 08:16:28 crc kubenswrapper[4780]: I1205 08:16:28.148778 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" path="/var/lib/kubelet/pods/f03c71d0-0aaf-4433-b375-2baefd7abdb8/volumes" Dec 05 08:16:28 crc kubenswrapper[4780]: I1205 08:16:28.245986 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:28 crc kubenswrapper[4780]: I1205 08:16:28.247229 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:29 crc kubenswrapper[4780]: I1205 08:16:29.670464 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:29 crc kubenswrapper[4780]: I1205 08:16:29.673056 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56fff8bbb4-tvswm" Dec 05 08:16:29 crc kubenswrapper[4780]: I1205 08:16:29.755674 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:29 crc kubenswrapper[4780]: I1205 08:16:29.922772 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-httpd" containerID="cri-o://2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" gracePeriod=30 Dec 05 08:16:29 crc kubenswrapper[4780]: I1205 08:16:29.924098 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-server" containerID="cri-o://18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" gracePeriod=30 Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.752358 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909348 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909464 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mc4q\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909548 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909640 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909736 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.909815 
4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data\") pod \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\" (UID: \"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7\") " Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.911414 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.911828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931809 4780 generic.go:334] "Generic (PLEG): container finished" podID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerID="18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" exitCode=0 Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931846 4780 generic.go:334] "Generic (PLEG): container finished" podID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerID="2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" exitCode=0 Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931871 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerDied","Data":"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85"} Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931922 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerDied","Data":"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae"} Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931939 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" event={"ID":"5ba0be0b-fe8c-4c2b-9800-86933a2b71e7","Type":"ContainerDied","Data":"a940bad831d9564d45cc96293d9c04f2437b068ab4111a1ed8dac46cd1ba2b6d"} Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.931931 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b7f7855cb-7dfwb" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.932079 4780 scope.go:117] "RemoveContainer" containerID="18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.938397 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q" (OuterVolumeSpecName: "kube-api-access-4mc4q") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "kube-api-access-4mc4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.940050 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.973373 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data" (OuterVolumeSpecName: "config-data") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.973386 4780 scope.go:117] "RemoveContainer" containerID="2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.973477 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" (UID: "5ba0be0b-fe8c-4c2b-9800-86933a2b71e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.991447 4780 scope.go:117] "RemoveContainer" containerID="18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" Dec 05 08:16:30 crc kubenswrapper[4780]: E1205 08:16:30.991934 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85\": container with ID starting with 18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85 not found: ID does not exist" containerID="18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.991981 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85"} err="failed to get container status \"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85\": rpc error: code = NotFound desc = could not find container \"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85\": container with ID starting with 18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85 not found: ID does not exist" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.992005 4780 scope.go:117] "RemoveContainer" containerID="2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" Dec 05 08:16:30 crc kubenswrapper[4780]: E1205 08:16:30.992457 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae\": container with ID starting with 2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae not found: ID does not exist" containerID="2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.992512 4780 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae"} err="failed to get container status \"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae\": rpc error: code = NotFound desc = could not find container \"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae\": container with ID starting with 2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae not found: ID does not exist" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.992544 4780 scope.go:117] "RemoveContainer" containerID="18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.992840 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85"} err="failed to get container status \"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85\": rpc error: code = NotFound desc = could not find container \"18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85\": container with ID starting with 18673140e82c907bf71d5f29c8c17bb221d49482698c1951ac536c4609528e85 not found: ID does not exist" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.992864 4780 scope.go:117] "RemoveContainer" containerID="2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae" Dec 05 08:16:30 crc kubenswrapper[4780]: I1205 08:16:30.993181 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae"} err="failed to get container status \"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae\": rpc error: code = NotFound desc = could not find container \"2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae\": container with ID starting with 2119cc06d2da2d8b6eb1e1e57bd7ed28c39a40acf569343292a0a5306f85beae not found: ID does not exist" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011569 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011610 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011621 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011630 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011640 4780 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.011651 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mc4q\" (UniqueName: 
\"kubernetes.io/projected/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7-kube-api-access-4mc4q\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.257368 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:31 crc kubenswrapper[4780]: I1205 08:16:31.265485 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6b7f7855cb-7dfwb"] Dec 05 08:16:32 crc kubenswrapper[4780]: I1205 08:16:32.156256 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" path="/var/lib/kubelet/pods/5ba0be0b-fe8c-4c2b-9800-86933a2b71e7/volumes" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.704813 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bkzrr"] Dec 05 08:16:35 crc kubenswrapper[4780]: E1205 08:16:35.705593 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-server" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705611 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-server" Dec 05 08:16:35 crc kubenswrapper[4780]: E1205 08:16:35.705622 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b47e2e-4b29-4148-a992-34d12379b270" containerName="swift-ring-rebalance" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705631 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b47e2e-4b29-4148-a992-34d12379b270" containerName="swift-ring-rebalance" Dec 05 08:16:35 crc kubenswrapper[4780]: E1205 08:16:35.705649 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="init" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705657 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="init" Dec 05 08:16:35 crc kubenswrapper[4780]: E1205 08:16:35.705669 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="dnsmasq-dns" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705677 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="dnsmasq-dns" Dec 05 08:16:35 crc kubenswrapper[4780]: E1205 08:16:35.705693 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-httpd" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705701 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-httpd" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705949 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-httpd" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705966 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba0be0b-fe8c-4c2b-9800-86933a2b71e7" containerName="proxy-server" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705984 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03c71d0-0aaf-4433-b375-2baefd7abdb8" containerName="dnsmasq-dns" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.705997 4780 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="42b47e2e-4b29-4148-a992-34d12379b270" containerName="swift-ring-rebalance" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.706706 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.713541 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9cf9-account-create-update-dgcgh"] Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.715077 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.717114 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.722228 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bkzrr"] Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.733493 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9cf9-account-create-update-dgcgh"] Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.802118 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghcl\" (UniqueName: \"kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl\") pod \"cinder-db-create-bkzrr\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.802180 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsr7k\" (UniqueName: \"kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.802694 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.802811 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts\") pod \"cinder-db-create-bkzrr\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.903876 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.903999 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts\") pod \"cinder-db-create-bkzrr\" (UID: 
\"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.904035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghcl\" (UniqueName: \"kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl\") pod \"cinder-db-create-bkzrr\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.904063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsr7k\" (UniqueName: \"kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.904869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.904949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts\") pod \"cinder-db-create-bkzrr\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.923843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsr7k\" (UniqueName: \"kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k\") pod \"cinder-9cf9-account-create-update-dgcgh\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:35 crc kubenswrapper[4780]: I1205 08:16:35.925342 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghcl\" (UniqueName: \"kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl\") pod \"cinder-db-create-bkzrr\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:36 crc kubenswrapper[4780]: I1205 08:16:36.026564 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:36 crc kubenswrapper[4780]: I1205 08:16:36.037151 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:36 crc kubenswrapper[4780]: I1205 08:16:36.468989 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bkzrr"] Dec 05 08:16:36 crc kubenswrapper[4780]: I1205 08:16:36.519473 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9cf9-account-create-update-dgcgh"] Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.004439 4780 generic.go:334] "Generic (PLEG): container finished" podID="e3b0f76e-f680-4432-acaf-3bb47c0dea49" containerID="e3f8b439f935b50eab6e3ca0e82f447b3c65a1d0058bddba2d63c1eb62a87e11" exitCode=0 Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.004485 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bkzrr" event={"ID":"e3b0f76e-f680-4432-acaf-3bb47c0dea49","Type":"ContainerDied","Data":"e3f8b439f935b50eab6e3ca0e82f447b3c65a1d0058bddba2d63c1eb62a87e11"} Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.004532 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bkzrr" event={"ID":"e3b0f76e-f680-4432-acaf-3bb47c0dea49","Type":"ContainerStarted","Data":"ae60a97035a92381364e298f1717943083801df7b03f82d32fdf02478f1ffc99"} Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.006493 4780 generic.go:334] "Generic (PLEG): container finished" podID="831d476c-5e00-427c-8221-c65eb889ca3c" containerID="af61b0ce60dfb6455de26a5dfecf64061d2e5259b52168d4c49063ba6317fbf0" exitCode=0 Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.006565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9cf9-account-create-update-dgcgh" event={"ID":"831d476c-5e00-427c-8221-c65eb889ca3c","Type":"ContainerDied","Data":"af61b0ce60dfb6455de26a5dfecf64061d2e5259b52168d4c49063ba6317fbf0"} Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.007036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9cf9-account-create-update-dgcgh" event={"ID":"831d476c-5e00-427c-8221-c65eb889ca3c","Type":"ContainerStarted","Data":"a12a8b3d2108b5ced86fecf4bb7f8450965c148f1dbcc369f82ed177f7bf9d75"} Dec 05 08:16:37 crc kubenswrapper[4780]: I1205 08:16:37.139370 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:16:37 crc kubenswrapper[4780]: E1205 08:16:37.139651 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.394683 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.399720 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.447790 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mghcl\" (UniqueName: \"kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl\") pod \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.447954 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts\") pod \"831d476c-5e00-427c-8221-c65eb889ca3c\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.448014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsr7k\" (UniqueName: \"kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k\") pod \"831d476c-5e00-427c-8221-c65eb889ca3c\" (UID: \"831d476c-5e00-427c-8221-c65eb889ca3c\") " Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.448090 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts\") pod \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\" (UID: \"e3b0f76e-f680-4432-acaf-3bb47c0dea49\") " Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.448949 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3b0f76e-f680-4432-acaf-3bb47c0dea49" (UID: "e3b0f76e-f680-4432-acaf-3bb47c0dea49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.449484 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "831d476c-5e00-427c-8221-c65eb889ca3c" (UID: "831d476c-5e00-427c-8221-c65eb889ca3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.463334 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl" (OuterVolumeSpecName: "kube-api-access-mghcl") pod "e3b0f76e-f680-4432-acaf-3bb47c0dea49" (UID: "e3b0f76e-f680-4432-acaf-3bb47c0dea49"). InnerVolumeSpecName "kube-api-access-mghcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.464058 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k" (OuterVolumeSpecName: "kube-api-access-qsr7k") pod "831d476c-5e00-427c-8221-c65eb889ca3c" (UID: "831d476c-5e00-427c-8221-c65eb889ca3c"). InnerVolumeSpecName "kube-api-access-qsr7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.549857 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3b0f76e-f680-4432-acaf-3bb47c0dea49-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.549901 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mghcl\" (UniqueName: \"kubernetes.io/projected/e3b0f76e-f680-4432-acaf-3bb47c0dea49-kube-api-access-mghcl\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.549913 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/831d476c-5e00-427c-8221-c65eb889ca3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:38 crc kubenswrapper[4780]: I1205 08:16:38.549921 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsr7k\" (UniqueName: \"kubernetes.io/projected/831d476c-5e00-427c-8221-c65eb889ca3c-kube-api-access-qsr7k\") on node \"crc\" DevicePath \"\"" Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.025657 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9cf9-account-create-update-dgcgh" event={"ID":"831d476c-5e00-427c-8221-c65eb889ca3c","Type":"ContainerDied","Data":"a12a8b3d2108b5ced86fecf4bb7f8450965c148f1dbcc369f82ed177f7bf9d75"} Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.025686 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9cf9-account-create-update-dgcgh" Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.025701 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12a8b3d2108b5ced86fecf4bb7f8450965c148f1dbcc369f82ed177f7bf9d75" Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.027019 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bkzrr" event={"ID":"e3b0f76e-f680-4432-acaf-3bb47c0dea49","Type":"ContainerDied","Data":"ae60a97035a92381364e298f1717943083801df7b03f82d32fdf02478f1ffc99"} Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.027040 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae60a97035a92381364e298f1717943083801df7b03f82d32fdf02478f1ffc99" Dec 05 08:16:39 crc kubenswrapper[4780]: I1205 08:16:39.027079 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bkzrr" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.865401 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-j9ks4"] Dec 05 08:16:40 crc kubenswrapper[4780]: E1205 08:16:40.866090 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="831d476c-5e00-427c-8221-c65eb889ca3c" containerName="mariadb-account-create-update" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.866103 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="831d476c-5e00-427c-8221-c65eb889ca3c" containerName="mariadb-account-create-update" Dec 05 08:16:40 crc kubenswrapper[4780]: E1205 08:16:40.866118 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b0f76e-f680-4432-acaf-3bb47c0dea49" containerName="mariadb-database-create" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.866125 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b0f76e-f680-4432-acaf-3bb47c0dea49" containerName="mariadb-database-create" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.866311 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b0f76e-f680-4432-acaf-3bb47c0dea49" containerName="mariadb-database-create" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.866326 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="831d476c-5e00-427c-8221-c65eb889ca3c" containerName="mariadb-account-create-update" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.866994 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.870090 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.870120 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.870571 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pk7v7" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.874677 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j9ks4"] Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.992519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.992562 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.992593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.992742 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.992783 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qrg7\" (UniqueName: \"kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:40 crc kubenswrapper[4780]: I1205 08:16:40.993077 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.094726 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.096171 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.096351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.096542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qrg7\" (UniqueName: \"kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.096728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.096954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.097250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.101227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.101532 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.101567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.102600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.113687 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qrg7\" (UniqueName: \"kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7\") pod \"cinder-db-sync-j9ks4\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.202967 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:16:41 crc kubenswrapper[4780]: I1205 08:16:41.633926 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j9ks4"] Dec 05 08:16:42 crc kubenswrapper[4780]: I1205 08:16:42.052037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j9ks4" event={"ID":"d153a814-ed3a-44f1-baef-9c922f0cb899","Type":"ContainerStarted","Data":"32a90586e7f177fab7d9bebc4c36510335f0f6cd722000dda46ceee826e9c196"} Dec 05 08:16:50 crc kubenswrapper[4780]: I1205 08:16:50.138625 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:16:50 crc kubenswrapper[4780]: E1205 08:16:50.139969 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.142435 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.144763 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.151447 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.259176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.259355 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.259380 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4nv\" (UniqueName: \"kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.360817 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.360952 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.360979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj4nv\" (UniqueName: \"kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.361375 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.361444 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.384264 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj4nv\" (UniqueName: \"kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv\") pod \"redhat-marketplace-dlwsj\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:16:59 crc kubenswrapper[4780]: I1205 08:16:59.473092 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:01 crc kubenswrapper[4780]: I1205 08:17:01.491699 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:17:01 crc kubenswrapper[4780]: W1205 08:17:01.497933 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9952530_e5e0_4206_97e1_6d97beea33a9.slice/crio-6a6095c5c594fd73b60c385c723aa2d58c9496700b5ed5aad13ccf28f828707c WatchSource:0}: Error finding container 6a6095c5c594fd73b60c385c723aa2d58c9496700b5ed5aad13ccf28f828707c: Status 404 returned error can't find the container with id 6a6095c5c594fd73b60c385c723aa2d58c9496700b5ed5aad13ccf28f828707c Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.138747 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:17:02 crc kubenswrapper[4780]: E1205 08:17:02.139468 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.305598 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerID="4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384" exitCode=0 Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.305651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerDied","Data":"4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384"} Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.306177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerStarted","Data":"6a6095c5c594fd73b60c385c723aa2d58c9496700b5ed5aad13ccf28f828707c"} Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.308991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j9ks4" event={"ID":"d153a814-ed3a-44f1-baef-9c922f0cb899","Type":"ContainerStarted","Data":"fda019b011078bab8834b929dced50472039854968088816e4bf09a075c4a0c4"} Dec 05 08:17:02 crc kubenswrapper[4780]: I1205 08:17:02.350311 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-j9ks4" podStartSLOduration=2.850843308 podStartE2EDuration="22.350295824s" podCreationTimestamp="2025-12-05 08:16:40 +0000 UTC" firstStartedPulling="2025-12-05 08:16:41.636679235 +0000 UTC m=+5435.706195567" lastFinishedPulling="2025-12-05 08:17:01.136131751 +0000 UTC m=+5455.205648083" observedRunningTime="2025-12-05 08:17:02.343275673 +0000 UTC m=+5456.412792005" watchObservedRunningTime="2025-12-05 08:17:02.350295824 +0000 UTC m=+5456.419812156" Dec 05 08:17:03 crc kubenswrapper[4780]: I1205 08:17:03.321218 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerID="64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec" exitCode=0 Dec 05 
08:17:03 crc kubenswrapper[4780]: I1205 08:17:03.321417 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerDied","Data":"64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec"} Dec 05 08:17:04 crc kubenswrapper[4780]: I1205 08:17:04.342840 4780 generic.go:334] "Generic (PLEG): container finished" podID="d153a814-ed3a-44f1-baef-9c922f0cb899" containerID="fda019b011078bab8834b929dced50472039854968088816e4bf09a075c4a0c4" exitCode=0 Dec 05 08:17:04 crc kubenswrapper[4780]: I1205 08:17:04.342959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j9ks4" event={"ID":"d153a814-ed3a-44f1-baef-9c922f0cb899","Type":"ContainerDied","Data":"fda019b011078bab8834b929dced50472039854968088816e4bf09a075c4a0c4"} Dec 05 08:17:04 crc kubenswrapper[4780]: I1205 08:17:04.345823 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerStarted","Data":"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263"} Dec 05 08:17:04 crc kubenswrapper[4780]: I1205 08:17:04.393648 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dlwsj" podStartSLOduration=3.956704188 podStartE2EDuration="5.393617221s" podCreationTimestamp="2025-12-05 08:16:59 +0000 UTC" firstStartedPulling="2025-12-05 08:17:02.308381994 +0000 UTC m=+5456.377898336" lastFinishedPulling="2025-12-05 08:17:03.745295047 +0000 UTC m=+5457.814811369" observedRunningTime="2025-12-05 08:17:04.386415945 +0000 UTC m=+5458.455932327" watchObservedRunningTime="2025-12-05 08:17:04.393617221 +0000 UTC m=+5458.463133593" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.631105 4780 util.go:48] "No ready sandbox for pod can be found. 
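(Note: the pod_startup_latency_tracker entries encode two durations worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window, lastFinishedPulling minus firstStartedPulling. For cinder-db-sync-j9ks4: 08:17:02.350 minus 08:16:40 gives 22.35s end to end; subtracting the 19.50s pull window leaves the reported 2.85s. The same arithmetic in Go, with the timestamps copied from the log.)

package main

import (
    "fmt"
    "time"
)

func mustParse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2025-12-05 08:16:40 +0000 UTC")
    firstPull := mustParse("2025-12-05 08:16:41.636679235 +0000 UTC")
    lastPull := mustParse("2025-12-05 08:17:01.136131751 +0000 UTC")
    running := mustParse("2025-12-05 08:17:02.350295824 +0000 UTC")

    e2e := running.Sub(created)
    slo := e2e - lastPull.Sub(firstPull) // end-to-end minus image pulling

    fmt.Println("podStartE2EDuration:", e2e) // 22.350295824s, as logged
    fmt.Println("podStartSLOduration:", slo) // 2.850843308s, as logged
}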
Need to start a new one" pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.778741 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.778810 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.778901 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.778930 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.778987 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.779050 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qrg7\" (UniqueName: \"kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7\") pod \"d153a814-ed3a-44f1-baef-9c922f0cb899\" (UID: \"d153a814-ed3a-44f1-baef-9c922f0cb899\") " Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.779653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.779911 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d153a814-ed3a-44f1-baef-9c922f0cb899-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.783743 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts" (OuterVolumeSpecName: "scripts") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.783898 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.784206 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7" (OuterVolumeSpecName: "kube-api-access-2qrg7") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "kube-api-access-2qrg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.805942 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.825021 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data" (OuterVolumeSpecName: "config-data") pod "d153a814-ed3a-44f1-baef-9c922f0cb899" (UID: "d153a814-ed3a-44f1-baef-9c922f0cb899"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.881059 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.881088 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.881099 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.881109 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d153a814-ed3a-44f1-baef-9c922f0cb899-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:05 crc kubenswrapper[4780]: I1205 08:17:05.881117 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qrg7\" (UniqueName: \"kubernetes.io/projected/d153a814-ed3a-44f1-baef-9c922f0cb899-kube-api-access-2qrg7\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.363080 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j9ks4" event={"ID":"d153a814-ed3a-44f1-baef-9c922f0cb899","Type":"ContainerDied","Data":"32a90586e7f177fab7d9bebc4c36510335f0f6cd722000dda46ceee826e9c196"} Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.363422 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a90586e7f177fab7d9bebc4c36510335f0f6cd722000dda46ceee826e9c196" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.363138 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j9ks4" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.687961 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:17:06 crc kubenswrapper[4780]: E1205 08:17:06.688422 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d153a814-ed3a-44f1-baef-9c922f0cb899" containerName="cinder-db-sync" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.688440 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d153a814-ed3a-44f1-baef-9c922f0cb899" containerName="cinder-db-sync" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.688659 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d153a814-ed3a-44f1-baef-9c922f0cb899" containerName="cinder-db-sync" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.690039 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.703857 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.818767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnndp\" (UniqueName: \"kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.818918 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.818959 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.819009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.819133 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.853668 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.855775 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.858320 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.858601 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.858781 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pk7v7" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.859870 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.887636 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.926869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.926995 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.927057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnndp\" (UniqueName: \"kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.927119 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.927152 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.928161 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.931708 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc 
kubenswrapper[4780]: I1205 08:17:06.934412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.937548 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:06 crc kubenswrapper[4780]: I1205 08:17:06.972198 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnndp\" (UniqueName: \"kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp\") pod \"dnsmasq-dns-d8b9ddbf7-4wtjp\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.013055 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.028898 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.028967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.029046 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmwg\" (UniqueName: \"kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.029107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.029185 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.029233 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" 
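(Note: across this window the SyncLoop lines cycle through the kubelet's event sources: ADD, UPDATE, DELETE, and REMOVE arriving from the API server (source="api"), and container-state changes surfaced by the PLEG as ContainerStarted/ContainerDied with exit codes. A compressed sketch of that dispatch shape; the types and channel wiring are made up for illustration.)

package main

import "fmt"

type podEvent struct {
    source string // "api" or "pleg"
    op     string // ADD, UPDATE, DELETE, REMOVE, ContainerStarted, ContainerDied
    pod    string
}

// syncLoop dispatches each event the way the log's kubelet.go and generic.go
// lines suggest: API ops drive pod workers, PLEG ops report runtime state.
func syncLoop(events <-chan podEvent) {
    for e := range events {
        switch e.op {
        case "ADD", "UPDATE":
            fmt.Printf("SyncLoop %s source=%s pod=%s -> (re)sync pod\n", e.op, e.source, e.pod)
        case "DELETE":
            fmt.Printf("SyncLoop DELETE source=%s pod=%s -> start graceful teardown\n", e.source, e.pod)
        case "REMOVE":
            fmt.Printf("SyncLoop REMOVE source=%s pod=%s -> forget pod entirely\n", e.source, e.pod)
        default: // PLEG-originated container events
            fmt.Printf("SyncLoop (PLEG): %s for pod=%s\n", e.op, e.pod)
        }
    }
}

func main() {
    ch := make(chan podEvent, 3)
    ch <- podEvent{"api", "ADD", "openstack/cinder-api-0"}
    ch <- podEvent{"pleg", "ContainerStarted", "openstack/cinder-api-0"}
    ch <- podEvent{"api", "DELETE", "openstack/swift-proxy-6b7f7855cb-7dfwb"}
    close(ch)
    syncLoop(ch)
}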
Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.029250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131291 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131633 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131911 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.131986 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qmwg\" (UniqueName: \"kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.132605 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: 
I1205 08:17:07.135053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.135800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.135812 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.138893 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.151356 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qmwg\" (UniqueName: \"kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg\") pod \"cinder-api-0\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.191537 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:07 crc kubenswrapper[4780]: W1205 08:17:07.523435 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96a512b2_ee45_4b1e_bb25_53f11184d533.slice/crio-7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02 WatchSource:0}: Error finding container 7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02: Status 404 returned error can't find the container with id 7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02 Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.524362 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:17:07 crc kubenswrapper[4780]: I1205 08:17:07.661657 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:08 crc kubenswrapper[4780]: I1205 08:17:08.388560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerStarted","Data":"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e"} Dec 05 08:17:08 crc kubenswrapper[4780]: I1205 08:17:08.388847 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerStarted","Data":"c71ec415756531c36fc4588fb88f9cf6f96ed2f63c001b80bb79864616959ea9"} Dec 05 08:17:08 crc kubenswrapper[4780]: I1205 08:17:08.391683 4780 generic.go:334] "Generic (PLEG): container finished" podID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerID="6e26139e0b7aa87aacc04368f0516752b07cd3432fc1fa2e6107726f08f2f7c0" exitCode=0 Dec 05 08:17:08 crc kubenswrapper[4780]: I1205 08:17:08.391726 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" event={"ID":"96a512b2-ee45-4b1e-bb25-53f11184d533","Type":"ContainerDied","Data":"6e26139e0b7aa87aacc04368f0516752b07cd3432fc1fa2e6107726f08f2f7c0"} Dec 05 08:17:08 crc kubenswrapper[4780]: I1205 08:17:08.391774 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" event={"ID":"96a512b2-ee45-4b1e-bb25-53f11184d533","Type":"ContainerStarted","Data":"7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02"} Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.403603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" event={"ID":"96a512b2-ee45-4b1e-bb25-53f11184d533","Type":"ContainerStarted","Data":"40a48118eaeb20ea32c39b619db2fb7c593b0621d9bdd9cee19d65f38268fa3a"} Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.404235 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.405775 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerStarted","Data":"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3"} Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.405942 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.446872 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" podStartSLOduration=3.446848563 
podStartE2EDuration="3.446848563s" podCreationTimestamp="2025-12-05 08:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:17:09.43460057 +0000 UTC m=+5463.504116932" watchObservedRunningTime="2025-12-05 08:17:09.446848563 +0000 UTC m=+5463.516364895" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.469199 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.475122 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.476956 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.481213 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.481183826 podStartE2EDuration="3.481183826s" podCreationTimestamp="2025-12-05 08:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:17:09.475099601 +0000 UTC m=+5463.544615933" watchObservedRunningTime="2025-12-05 08:17:09.481183826 +0000 UTC m=+5463.550700158" Dec 05 08:17:09 crc kubenswrapper[4780]: I1205 08:17:09.536330 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:10 crc kubenswrapper[4780]: I1205 08:17:10.465487 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:11 crc kubenswrapper[4780]: I1205 08:17:11.421494 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api-log" containerID="cri-o://0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" gracePeriod=30 Dec 05 08:17:11 crc kubenswrapper[4780]: I1205 08:17:11.421537 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api" containerID="cri-o://5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" gracePeriod=30 Dec 05 08:17:11 crc kubenswrapper[4780]: I1205 08:17:11.940265 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033139 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033234 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qmwg\" (UniqueName: \"kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033313 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033335 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033531 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033576 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.033614 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs\") pod \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\" (UID: \"14181d20-f8b8-4e06-a5d4-4d28c8eaec07\") " Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.034556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs" (OuterVolumeSpecName: "logs") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.034976 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.035312 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.035334 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.039629 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg" (OuterVolumeSpecName: "kube-api-access-8qmwg") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "kube-api-access-8qmwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.039986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.040105 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts" (OuterVolumeSpecName: "scripts") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.059540 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.078621 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data" (OuterVolumeSpecName: "config-data") pod "14181d20-f8b8-4e06-a5d4-4d28c8eaec07" (UID: "14181d20-f8b8-4e06-a5d4-4d28c8eaec07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.137413 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qmwg\" (UniqueName: \"kubernetes.io/projected/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-kube-api-access-8qmwg\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.137636 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.137744 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.137818 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.137900 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14181d20-f8b8-4e06-a5d4-4d28c8eaec07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.429845 4780 generic.go:334] "Generic (PLEG): container finished" podID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerID="5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" exitCode=0 Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.429910 4780 generic.go:334] "Generic (PLEG): container finished" podID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerID="0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" exitCode=143 Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.429953 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.429968 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerDied","Data":"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3"} Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.430021 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerDied","Data":"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e"} Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.430036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"14181d20-f8b8-4e06-a5d4-4d28c8eaec07","Type":"ContainerDied","Data":"c71ec415756531c36fc4588fb88f9cf6f96ed2f63c001b80bb79864616959ea9"} Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.430057 4780 scope.go:117] "RemoveContainer" containerID="5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.455010 4780 scope.go:117] "RemoveContainer" containerID="0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.457397 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.474530 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.481773 4780 scope.go:117] "RemoveContainer" containerID="5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" Dec 05 08:17:12 crc kubenswrapper[4780]: E1205 08:17:12.482351 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3\": container with ID starting with 5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3 not found: ID does not exist" containerID="5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.482405 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3"} err="failed to get container status \"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3\": rpc error: code = NotFound desc = could not find container \"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3\": container with ID starting with 5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3 not found: ID does not exist" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.482443 4780 scope.go:117] "RemoveContainer" containerID="0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" Dec 05 08:17:12 crc kubenswrapper[4780]: E1205 08:17:12.482805 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e\": container with ID starting with 0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e not found: ID does not exist" containerID="0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.482835 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e"} err="failed to get container status \"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e\": rpc error: code = NotFound desc = could not find container \"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e\": container with ID starting with 0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e not found: ID does not exist" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.482857 4780 scope.go:117] "RemoveContainer" containerID="5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.483143 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3"} err="failed to get container status \"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3\": rpc error: code = NotFound desc = could not find container \"5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3\": container with ID starting with 5b00ffd8710b393a62209443b23d30168b20f19cce6fdfab437e79d8f3ed6be3 not found: ID does not exist" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.483171 4780 scope.go:117] "RemoveContainer" containerID="0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.483466 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e"} err="failed to get container status \"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e\": rpc error: code = NotFound desc = could not find container \"0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e\": container with ID starting with 0b06b71b80ba83b1de4521c452fe3cdb636e60f926a5ebecb73942e38c421a6e not found: ID does not exist" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.486408 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:12 crc kubenswrapper[4780]: E1205 08:17:12.486991 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api-log" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.487018 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api-log" Dec 05 08:17:12 crc kubenswrapper[4780]: E1205 08:17:12.487046 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.487054 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.487235 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api-log" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.487270 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" containerName="cinder-api" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.488487 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.492490 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.492907 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.493284 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.493414 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.494113 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.494745 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pk7v7" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.495008 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.650771 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.650829 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.650971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651010 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651037 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjlm\" (UniqueName: \"kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651119 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651173 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.651246 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.719436 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjlm\" (UniqueName: \"kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753240 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753810 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753907 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753957 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.753993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.754076 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.755068 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.758455 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.758607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.758678 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.759186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.759949 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.771610 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom\") pod \"cinder-api-0\" 
(UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.773722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjlm\" (UniqueName: \"kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm\") pod \"cinder-api-0\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " pod="openstack/cinder-api-0" Dec 05 08:17:12 crc kubenswrapper[4780]: I1205 08:17:12.814804 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:13 crc kubenswrapper[4780]: I1205 08:17:13.139352 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:17:13 crc kubenswrapper[4780]: E1205 08:17:13.139670 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:17:13 crc kubenswrapper[4780]: I1205 08:17:13.234456 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:13 crc kubenswrapper[4780]: I1205 08:17:13.440635 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerStarted","Data":"1be87400ca027dce4927969a8ce9d452e4056c8db29980548fdc0d0adbae0702"} Dec 05 08:17:13 crc kubenswrapper[4780]: I1205 08:17:13.442274 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dlwsj" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="registry-server" containerID="cri-o://44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263" gracePeriod=2 Dec 05 08:17:13 crc kubenswrapper[4780]: I1205 08:17:13.910295 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.082162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities\") pod \"a9952530-e5e0-4206-97e1-6d97beea33a9\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.082232 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj4nv\" (UniqueName: \"kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv\") pod \"a9952530-e5e0-4206-97e1-6d97beea33a9\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.082355 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content\") pod \"a9952530-e5e0-4206-97e1-6d97beea33a9\" (UID: \"a9952530-e5e0-4206-97e1-6d97beea33a9\") " Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.089336 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities" (OuterVolumeSpecName: "utilities") pod "a9952530-e5e0-4206-97e1-6d97beea33a9" (UID: "a9952530-e5e0-4206-97e1-6d97beea33a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.091642 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv" (OuterVolumeSpecName: "kube-api-access-zj4nv") pod "a9952530-e5e0-4206-97e1-6d97beea33a9" (UID: "a9952530-e5e0-4206-97e1-6d97beea33a9"). InnerVolumeSpecName "kube-api-access-zj4nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.112101 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9952530-e5e0-4206-97e1-6d97beea33a9" (UID: "a9952530-e5e0-4206-97e1-6d97beea33a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.152237 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14181d20-f8b8-4e06-a5d4-4d28c8eaec07" path="/var/lib/kubelet/pods/14181d20-f8b8-4e06-a5d4-4d28c8eaec07/volumes" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.184560 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.184606 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj4nv\" (UniqueName: \"kubernetes.io/projected/a9952530-e5e0-4206-97e1-6d97beea33a9-kube-api-access-zj4nv\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.184622 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9952530-e5e0-4206-97e1-6d97beea33a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.456230 4780 generic.go:334] "Generic (PLEG): container finished" podID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerID="44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263" exitCode=0 Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.456301 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dlwsj" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.456320 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerDied","Data":"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263"} Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.457236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dlwsj" event={"ID":"a9952530-e5e0-4206-97e1-6d97beea33a9","Type":"ContainerDied","Data":"6a6095c5c594fd73b60c385c723aa2d58c9496700b5ed5aad13ccf28f828707c"} Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.457258 4780 scope.go:117] "RemoveContainer" containerID="44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.466851 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerStarted","Data":"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03"} Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.466924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerStarted","Data":"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41"} Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.467121 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.488924 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.498081 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dlwsj"] Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.504423 
4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.504402223 podStartE2EDuration="2.504402223s" podCreationTimestamp="2025-12-05 08:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:17:14.504337281 +0000 UTC m=+5468.573853623" watchObservedRunningTime="2025-12-05 08:17:14.504402223 +0000 UTC m=+5468.573918555" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.505303 4780 scope.go:117] "RemoveContainer" containerID="64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.530757 4780 scope.go:117] "RemoveContainer" containerID="4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.565887 4780 scope.go:117] "RemoveContainer" containerID="44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263" Dec 05 08:17:14 crc kubenswrapper[4780]: E1205 08:17:14.566260 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263\": container with ID starting with 44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263 not found: ID does not exist" containerID="44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.566311 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263"} err="failed to get container status \"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263\": rpc error: code = NotFound desc = could not find container \"44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263\": container with ID starting with 44212e24beca677d9d0bc28c0bd7543fa9de42da457d0783b38f823938ccd263 not found: ID does not exist" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.566345 4780 scope.go:117] "RemoveContainer" containerID="64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec" Dec 05 08:17:14 crc kubenswrapper[4780]: E1205 08:17:14.566897 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec\": container with ID starting with 64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec not found: ID does not exist" containerID="64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.566937 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec"} err="failed to get container status \"64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec\": rpc error: code = NotFound desc = could not find container \"64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec\": container with ID starting with 64655cf6e2af0ed7e5e4bc62c05ba477af08312daa256bad8ab1ca67dfcd87ec not found: ID does not exist" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.566965 4780 scope.go:117] "RemoveContainer" containerID="4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384" Dec 05 08:17:14 crc kubenswrapper[4780]: E1205 
08:17:14.567252 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384\": container with ID starting with 4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384 not found: ID does not exist" containerID="4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384" Dec 05 08:17:14 crc kubenswrapper[4780]: I1205 08:17:14.567283 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384"} err="failed to get container status \"4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384\": rpc error: code = NotFound desc = could not find container \"4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384\": container with ID starting with 4c778a343781c6cd81444a849198e7d1b58ff92634efc4a87fcb14e2fb016384 not found: ID does not exist" Dec 05 08:17:16 crc kubenswrapper[4780]: I1205 08:17:16.148434 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" path="/var/lib/kubelet/pods/a9952530-e5e0-4206-97e1-6d97beea33a9/volumes" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.015043 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.095647 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.096016 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="dnsmasq-dns" containerID="cri-o://c8155498f5450db309d81d7857e0bd505348d241b244901951a7e91190a8c862" gracePeriod=10 Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.500828 4780 generic.go:334] "Generic (PLEG): container finished" podID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerID="c8155498f5450db309d81d7857e0bd505348d241b244901951a7e91190a8c862" exitCode=0 Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.501187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" event={"ID":"005d187e-0a30-4c2b-8480-c9dd9c53fcd7","Type":"ContainerDied","Data":"c8155498f5450db309d81d7857e0bd505348d241b244901951a7e91190a8c862"} Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.501213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" event={"ID":"005d187e-0a30-4c2b-8480-c9dd9c53fcd7","Type":"ContainerDied","Data":"e69353d16fcbaa62f2d57bc7660f35e81c8cc7f11949c3f45bc943a782e07273"} Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.501225 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69353d16fcbaa62f2d57bc7660f35e81c8cc7f11949c3f45bc943a782e07273" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.559073 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.749173 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnqvb\" (UniqueName: \"kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb\") pod \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.749240 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb\") pod \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.749345 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config\") pod \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.749415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb\") pod \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.749479 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc\") pod \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\" (UID: \"005d187e-0a30-4c2b-8480-c9dd9c53fcd7\") " Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.754484 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb" (OuterVolumeSpecName: "kube-api-access-rnqvb") pod "005d187e-0a30-4c2b-8480-c9dd9c53fcd7" (UID: "005d187e-0a30-4c2b-8480-c9dd9c53fcd7"). InnerVolumeSpecName "kube-api-access-rnqvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.798262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config" (OuterVolumeSpecName: "config") pod "005d187e-0a30-4c2b-8480-c9dd9c53fcd7" (UID: "005d187e-0a30-4c2b-8480-c9dd9c53fcd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.802065 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "005d187e-0a30-4c2b-8480-c9dd9c53fcd7" (UID: "005d187e-0a30-4c2b-8480-c9dd9c53fcd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.804072 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "005d187e-0a30-4c2b-8480-c9dd9c53fcd7" (UID: "005d187e-0a30-4c2b-8480-c9dd9c53fcd7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.813387 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "005d187e-0a30-4c2b-8480-c9dd9c53fcd7" (UID: "005d187e-0a30-4c2b-8480-c9dd9c53fcd7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.851460 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.851488 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.851499 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.851512 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnqvb\" (UniqueName: \"kubernetes.io/projected/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-kube-api-access-rnqvb\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:17 crc kubenswrapper[4780]: I1205 08:17:17.851523 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/005d187e-0a30-4c2b-8480-c9dd9c53fcd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:18 crc kubenswrapper[4780]: I1205 08:17:18.508508 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b67b458c7-282fv" Dec 05 08:17:18 crc kubenswrapper[4780]: I1205 08:17:18.537513 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:17:18 crc kubenswrapper[4780]: I1205 08:17:18.544629 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b67b458c7-282fv"] Dec 05 08:17:20 crc kubenswrapper[4780]: I1205 08:17:20.153901 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" path="/var/lib/kubelet/pods/005d187e-0a30-4c2b-8480-c9dd9c53fcd7/volumes" Dec 05 08:17:24 crc kubenswrapper[4780]: I1205 08:17:24.139077 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:17:24 crc kubenswrapper[4780]: E1205 08:17:24.139619 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:17:24 crc kubenswrapper[4780]: I1205 08:17:24.709755 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 05 08:17:25 crc kubenswrapper[4780]: I1205 08:17:25.559173 4780 scope.go:117] "RemoveContainer" containerID="975196b32aa253c6dcf1678c7d1bbb570926fe291bf4e3a828c112e173d21bc7" Dec 05 08:17:37 crc kubenswrapper[4780]: I1205 08:17:37.138868 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:17:37 crc kubenswrapper[4780]: E1205 08:17:37.139943 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.525655 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:41 crc kubenswrapper[4780]: E1205 08:17:41.526525 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="init" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526538 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="init" Dec 05 08:17:41 crc kubenswrapper[4780]: E1205 08:17:41.526547 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="dnsmasq-dns" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526553 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="dnsmasq-dns" Dec 05 08:17:41 crc kubenswrapper[4780]: E1205 08:17:41.526569 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="extract-content" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526575 4780 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="extract-content" Dec 05 08:17:41 crc kubenswrapper[4780]: E1205 08:17:41.526588 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="registry-server" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526594 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="registry-server" Dec 05 08:17:41 crc kubenswrapper[4780]: E1205 08:17:41.526618 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="extract-utilities" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526624 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="extract-utilities" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526778 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9952530-e5e0-4206-97e1-6d97beea33a9" containerName="registry-server" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.526791 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="005d187e-0a30-4c2b-8480-c9dd9c53fcd7" containerName="dnsmasq-dns" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.527688 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.529774 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.542798 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.604570 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.604636 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.604654 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.604704 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxsn\" (UniqueName: \"kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.605172 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.605254 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707270 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707321 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707344 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707414 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxsn\" (UniqueName: \"kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.707504 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.708194 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.714416 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc 
kubenswrapper[4780]: I1205 08:17:41.714677 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.714907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.715600 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.732492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxsn\" (UniqueName: \"kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn\") pod \"cinder-scheduler-0\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:41 crc kubenswrapper[4780]: I1205 08:17:41.845567 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:42 crc kubenswrapper[4780]: I1205 08:17:42.282860 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:42 crc kubenswrapper[4780]: I1205 08:17:42.630334 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:42 crc kubenswrapper[4780]: I1205 08:17:42.630557 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api-log" containerID="cri-o://42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41" gracePeriod=30 Dec 05 08:17:42 crc kubenswrapper[4780]: I1205 08:17:42.630619 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api" containerID="cri-o://19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03" gracePeriod=30 Dec 05 08:17:42 crc kubenswrapper[4780]: I1205 08:17:42.748790 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerStarted","Data":"24d319f69d920d1451805f4a2e080a4a6a3a60d878c0e0c28404d8c8341a0d31"} Dec 05 08:17:43 crc kubenswrapper[4780]: I1205 08:17:43.757613 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerStarted","Data":"95eeaed5d97bd4b7733b9f2b60a25b8b8eebbe7868b32074b10d39c8f8864e4b"} Dec 05 08:17:43 crc kubenswrapper[4780]: I1205 08:17:43.757918 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerStarted","Data":"f44a6680e5fcd84b90a3a3c16aeb2f7932a001a960bc64de6c6b0cf232e34647"} Dec 05 08:17:43 crc kubenswrapper[4780]: I1205 
08:17:43.759591 4780 generic.go:334] "Generic (PLEG): container finished" podID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerID="42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41" exitCode=143 Dec 05 08:17:43 crc kubenswrapper[4780]: I1205 08:17:43.759632 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerDied","Data":"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41"} Dec 05 08:17:43 crc kubenswrapper[4780]: I1205 08:17:43.782144 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.50291335 podStartE2EDuration="2.782126944s" podCreationTimestamp="2025-12-05 08:17:41 +0000 UTC" firstStartedPulling="2025-12-05 08:17:42.291095941 +0000 UTC m=+5496.360612273" lastFinishedPulling="2025-12-05 08:17:42.570309545 +0000 UTC m=+5496.639825867" observedRunningTime="2025-12-05 08:17:43.775628488 +0000 UTC m=+5497.845144830" watchObservedRunningTime="2025-12-05 08:17:43.782126944 +0000 UTC m=+5497.851643276" Dec 05 08:17:45 crc kubenswrapper[4780]: I1205 08:17:45.759811 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.50:8776/healthcheck\": read tcp 10.217.0.2:40256->10.217.1.50:8776: read: connection reset by peer" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.175335 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.281959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282105 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282140 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282230 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282269 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282299 4780 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282324 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282364 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjlm\" (UniqueName: \"kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.282415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id\") pod \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\" (UID: \"92f01f18-c1da-41c3-9fe9-1bf4c45418c3\") " Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.284398 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs" (OuterVolumeSpecName: "logs") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.285148 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.293405 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.293474 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm" (OuterVolumeSpecName: "kube-api-access-5xjlm") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "kube-api-access-5xjlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.293733 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts" (OuterVolumeSpecName: "scripts") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.341958 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.348675 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.358087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.372028 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data" (OuterVolumeSpecName: "config-data") pod "92f01f18-c1da-41c3-9fe9-1bf4c45418c3" (UID: "92f01f18-c1da-41c3-9fe9-1bf4c45418c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384560 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384607 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384624 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384636 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384647 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xjlm\" (UniqueName: \"kubernetes.io/projected/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-kube-api-access-5xjlm\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384659 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384670 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384681 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.384692 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92f01f18-c1da-41c3-9fe9-1bf4c45418c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.785098 4780 generic.go:334] "Generic (PLEG): container finished" podID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerID="19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03" exitCode=0 Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.785141 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerDied","Data":"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03"} Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.785143 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.785172 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"92f01f18-c1da-41c3-9fe9-1bf4c45418c3","Type":"ContainerDied","Data":"1be87400ca027dce4927969a8ce9d452e4056c8db29980548fdc0d0adbae0702"} Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.785182 4780 scope.go:117] "RemoveContainer" containerID="19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.813236 4780 scope.go:117] "RemoveContainer" containerID="42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.818191 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.828070 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.838588 4780 scope.go:117] "RemoveContainer" containerID="19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03" Dec 05 08:17:46 crc kubenswrapper[4780]: E1205 08:17:46.839290 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03\": container with ID starting with 19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03 not found: ID does not exist" containerID="19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.839374 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03"} err="failed to get container status \"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03\": rpc error: code = NotFound desc = could not find container \"19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03\": container with ID starting with 19056e7d81afdd82909162b4eff53e150adb2fedc2086f607835e6686bc8cc03 not found: 
ID does not exist" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.839409 4780 scope.go:117] "RemoveContainer" containerID="42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41" Dec 05 08:17:46 crc kubenswrapper[4780]: E1205 08:17:46.839730 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41\": container with ID starting with 42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41 not found: ID does not exist" containerID="42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.839766 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41"} err="failed to get container status \"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41\": rpc error: code = NotFound desc = could not find container \"42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41\": container with ID starting with 42f65ad1730d6cd9f773b99bf1e2fe3a13fca9c81ff1834106914fd27f6f2f41 not found: ID does not exist" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.845900 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:46 crc kubenswrapper[4780]: E1205 08:17:46.847363 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api-log" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.847394 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api-log" Dec 05 08:17:46 crc kubenswrapper[4780]: E1205 08:17:46.847441 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.847450 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.847650 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api-log" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.847676 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" containerName="cinder-api" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.848678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.848870 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.850993 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.851120 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.851349 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.856858 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.995613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.995705 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0d923c-9a65-4436-bb37-eda463dd8de7-logs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.995764 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.995948 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.995996 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-scripts\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.996047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmlc4\" (UniqueName: \"kubernetes.io/projected/4e0d923c-9a65-4436-bb37-eda463dd8de7-kube-api-access-mmlc4\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.996206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e0d923c-9a65-4436-bb37-eda463dd8de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.996258 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:46 crc kubenswrapper[4780]: I1205 08:17:46.996301 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.098243 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099143 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099231 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099304 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0d923c-9a65-4436-bb37-eda463dd8de7-logs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099483 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-scripts\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmlc4\" (UniqueName: \"kubernetes.io/projected/4e0d923c-9a65-4436-bb37-eda463dd8de7-kube-api-access-mmlc4\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e0d923c-9a65-4436-bb37-eda463dd8de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099790 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e0d923c-9a65-4436-bb37-eda463dd8de7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.099855 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0d923c-9a65-4436-bb37-eda463dd8de7-logs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.103178 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data-custom\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.103822 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-config-data\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.103820 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-scripts\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.106248 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.106334 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.106674 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0d923c-9a65-4436-bb37-eda463dd8de7-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.115087 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmlc4\" (UniqueName: \"kubernetes.io/projected/4e0d923c-9a65-4436-bb37-eda463dd8de7-kube-api-access-mmlc4\") pod \"cinder-api-0\" (UID: \"4e0d923c-9a65-4436-bb37-eda463dd8de7\") " pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.198971 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 08:17:47 crc kubenswrapper[4780]: I1205 08:17:47.778147 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 08:17:47 crc kubenswrapper[4780]: W1205 08:17:47.790901 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e0d923c_9a65_4436_bb37_eda463dd8de7.slice/crio-2c95764a0a34af9605d5aa7807f827cc5d27a4025b4c6a007d4e6813a006c0a9 WatchSource:0}: Error finding container 2c95764a0a34af9605d5aa7807f827cc5d27a4025b4c6a007d4e6813a006c0a9: Status 404 returned error can't find the container with id 2c95764a0a34af9605d5aa7807f827cc5d27a4025b4c6a007d4e6813a006c0a9 Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.149553 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f01f18-c1da-41c3-9fe9-1bf4c45418c3" path="/var/lib/kubelet/pods/92f01f18-c1da-41c3-9fe9-1bf4c45418c3/volumes" Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.802023 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4e0d923c-9a65-4436-bb37-eda463dd8de7","Type":"ContainerStarted","Data":"fd05dc3d3601e371077fb649962c8aa4667fec67306247e4a184a0933336de8b"} Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.802068 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4e0d923c-9a65-4436-bb37-eda463dd8de7","Type":"ContainerStarted","Data":"53ca7674a6b68ae825ab4f2e6b345040ca317421bbee5c50c2ac2dbc4319623a"} Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.802079 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4e0d923c-9a65-4436-bb37-eda463dd8de7","Type":"ContainerStarted","Data":"2c95764a0a34af9605d5aa7807f827cc5d27a4025b4c6a007d4e6813a006c0a9"} Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.803406 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 08:17:48 crc kubenswrapper[4780]: I1205 08:17:48.829397 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.829377864 podStartE2EDuration="2.829377864s" podCreationTimestamp="2025-12-05 08:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:17:48.821194592 +0000 UTC m=+5502.890710914" watchObservedRunningTime="2025-12-05 08:17:48.829377864 +0000 UTC m=+5502.898894196" Dec 05 08:17:49 crc kubenswrapper[4780]: I1205 08:17:49.139343 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:17:49 crc kubenswrapper[4780]: E1205 08:17:49.139837 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:17:52 crc kubenswrapper[4780]: I1205 08:17:52.063784 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 08:17:52 crc kubenswrapper[4780]: I1205 08:17:52.118272 4780 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:52 crc kubenswrapper[4780]: I1205 08:17:52.835280 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="cinder-scheduler" containerID="cri-o://f44a6680e5fcd84b90a3a3c16aeb2f7932a001a960bc64de6c6b0cf232e34647" gracePeriod=30 Dec 05 08:17:52 crc kubenswrapper[4780]: I1205 08:17:52.835330 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="probe" containerID="cri-o://95eeaed5d97bd4b7733b9f2b60a25b8b8eebbe7868b32074b10d39c8f8864e4b" gracePeriod=30 Dec 05 08:17:53 crc kubenswrapper[4780]: I1205 08:17:53.876984 4780 generic.go:334] "Generic (PLEG): container finished" podID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerID="95eeaed5d97bd4b7733b9f2b60a25b8b8eebbe7868b32074b10d39c8f8864e4b" exitCode=0 Dec 05 08:17:53 crc kubenswrapper[4780]: I1205 08:17:53.877325 4780 generic.go:334] "Generic (PLEG): container finished" podID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerID="f44a6680e5fcd84b90a3a3c16aeb2f7932a001a960bc64de6c6b0cf232e34647" exitCode=0 Dec 05 08:17:53 crc kubenswrapper[4780]: I1205 08:17:53.877079 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerDied","Data":"95eeaed5d97bd4b7733b9f2b60a25b8b8eebbe7868b32074b10d39c8f8864e4b"} Dec 05 08:17:53 crc kubenswrapper[4780]: I1205 08:17:53.877374 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerDied","Data":"f44a6680e5fcd84b90a3a3c16aeb2f7932a001a960bc64de6c6b0cf232e34647"} Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.083462 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.256953 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.257431 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.257478 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.257527 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.257656 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.257684 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfxsn\" (UniqueName: \"kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn\") pod \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\" (UID: \"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be\") " Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.259792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.264859 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.274076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts" (OuterVolumeSpecName: "scripts") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.275408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn" (OuterVolumeSpecName: "kube-api-access-gfxsn") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "kube-api-access-gfxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.308864 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.345054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data" (OuterVolumeSpecName: "config-data") pod "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" (UID: "eb782623-d4d9-4e76-aeaa-e6ed62ebd2be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360274 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360320 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360334 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360345 4780 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360358 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfxsn\" (UniqueName: \"kubernetes.io/projected/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-kube-api-access-gfxsn\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.360371 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.891464 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb782623-d4d9-4e76-aeaa-e6ed62ebd2be","Type":"ContainerDied","Data":"24d319f69d920d1451805f4a2e080a4a6a3a60d878c0e0c28404d8c8341a0d31"} Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.891515 4780 scope.go:117] "RemoveContainer" containerID="95eeaed5d97bd4b7733b9f2b60a25b8b8eebbe7868b32074b10d39c8f8864e4b" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.891636 4780 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.932305 4780 scope.go:117] "RemoveContainer" containerID="f44a6680e5fcd84b90a3a3c16aeb2f7932a001a960bc64de6c6b0cf232e34647" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.935037 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.955393 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.972340 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:54 crc kubenswrapper[4780]: E1205 08:17:54.973000 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="probe" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.973027 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="probe" Dec 05 08:17:54 crc kubenswrapper[4780]: E1205 08:17:54.973046 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="cinder-scheduler" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.973053 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="cinder-scheduler" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.973276 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="cinder-scheduler" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.973293 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" containerName="probe" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.974453 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.978483 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 08:17:54 crc kubenswrapper[4780]: I1205 08:17:54.980165 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.073978 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.074046 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.074097 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6l7\" (UniqueName: \"kubernetes.io/projected/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-kube-api-access-2k6l7\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.074200 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.074407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.074507 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.175944 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176007 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176090 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176110 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176131 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6l7\" (UniqueName: \"kubernetes.io/projected/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-kube-api-access-2k6l7\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176153 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.176561 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.180933 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.182066 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.185806 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-scripts\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.187009 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.193702 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6l7\" (UniqueName: \"kubernetes.io/projected/3636634a-6a80-4605-89e3-6b2f3f4e6f0c-kube-api-access-2k6l7\") pod \"cinder-scheduler-0\" (UID: \"3636634a-6a80-4605-89e3-6b2f3f4e6f0c\") " pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 
crc kubenswrapper[4780]: I1205 08:17:55.292309 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.711852 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 08:17:55 crc kubenswrapper[4780]: I1205 08:17:55.911060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3636634a-6a80-4605-89e3-6b2f3f4e6f0c","Type":"ContainerStarted","Data":"bc75197f05a63108acf41e7be8975da7381fc307aaf8d30295a71cf46a1e77e2"} Dec 05 08:17:56 crc kubenswrapper[4780]: I1205 08:17:56.148925 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb782623-d4d9-4e76-aeaa-e6ed62ebd2be" path="/var/lib/kubelet/pods/eb782623-d4d9-4e76-aeaa-e6ed62ebd2be/volumes" Dec 05 08:17:56 crc kubenswrapper[4780]: I1205 08:17:56.921285 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3636634a-6a80-4605-89e3-6b2f3f4e6f0c","Type":"ContainerStarted","Data":"77ffdc06355277601b9f101289c5e033ced1222ffbcd22cc7d8e19e4b0d58477"} Dec 05 08:17:56 crc kubenswrapper[4780]: I1205 08:17:56.921624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3636634a-6a80-4605-89e3-6b2f3f4e6f0c","Type":"ContainerStarted","Data":"bea15177da2c9dd2435b78dc5bfaf5ecf2882b13134861ebef1718fdf771c7ab"} Dec 05 08:17:56 crc kubenswrapper[4780]: I1205 08:17:56.948645 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.948625629 podStartE2EDuration="2.948625629s" podCreationTimestamp="2025-12-05 08:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:17:56.944424225 +0000 UTC m=+5511.013940567" watchObservedRunningTime="2025-12-05 08:17:56.948625629 +0000 UTC m=+5511.018141971" Dec 05 08:17:59 crc kubenswrapper[4780]: I1205 08:17:59.131916 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 05 08:18:00 crc kubenswrapper[4780]: I1205 08:18:00.292932 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 08:18:04 crc kubenswrapper[4780]: I1205 08:18:04.138675 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:18:04 crc kubenswrapper[4780]: E1205 08:18:04.139417 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:18:05 crc kubenswrapper[4780]: I1205 08:18:05.519364 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.124867 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8pbvc"] Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.126568 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.134866 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pbvc"] Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.235516 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ccf1-account-create-update-ztgdc"] Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.237211 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.239362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.248318 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5kl\" (UniqueName: \"kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.248417 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.248602 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ccf1-account-create-update-ztgdc"] Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.350279 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.350424 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5kl\" (UniqueName: \"kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.350464 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.350510 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sds\" (UniqueName: \"kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.351365 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.370094 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5kl\" (UniqueName: \"kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl\") pod \"glance-db-create-8pbvc\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.449022 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.453258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.453329 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sds\" (UniqueName: \"kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.454902 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.470288 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26sds\" (UniqueName: \"kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds\") pod \"glance-ccf1-account-create-update-ztgdc\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.554729 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:06 crc kubenswrapper[4780]: I1205 08:18:06.921285 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8pbvc"] Dec 05 08:18:06 crc kubenswrapper[4780]: W1205 08:18:06.922538 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5e66d54_de82_4ba3_b098_bef82a296ac1.slice/crio-6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c WatchSource:0}: Error finding container 6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c: Status 404 returned error can't find the container with id 6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c Dec 05 08:18:07 crc kubenswrapper[4780]: I1205 08:18:07.019164 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pbvc" event={"ID":"c5e66d54-de82-4ba3-b098-bef82a296ac1","Type":"ContainerStarted","Data":"6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c"} Dec 05 08:18:07 crc kubenswrapper[4780]: I1205 08:18:07.024104 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ccf1-account-create-update-ztgdc"] Dec 05 08:18:07 crc kubenswrapper[4780]: W1205 08:18:07.034873 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b4dce0_7ba9_44a4_8a74_69df0962589c.slice/crio-d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d WatchSource:0}: Error finding container d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d: Status 404 returned error can't find the container with id d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d Dec 05 08:18:08 crc kubenswrapper[4780]: I1205 08:18:08.029150 4780 generic.go:334] "Generic (PLEG): container finished" podID="c5e66d54-de82-4ba3-b098-bef82a296ac1" containerID="a0a68414e6cacab39e8c0e6d0f3246f3175a6834f26b0d28c757a8cb9f2ceab5" exitCode=0 Dec 05 08:18:08 crc kubenswrapper[4780]: I1205 08:18:08.029185 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pbvc" event={"ID":"c5e66d54-de82-4ba3-b098-bef82a296ac1","Type":"ContainerDied","Data":"a0a68414e6cacab39e8c0e6d0f3246f3175a6834f26b0d28c757a8cb9f2ceab5"} Dec 05 08:18:08 crc kubenswrapper[4780]: I1205 08:18:08.030804 4780 generic.go:334] "Generic (PLEG): container finished" podID="87b4dce0-7ba9-44a4-8a74-69df0962589c" containerID="38ee1f784b056fd21096d24b7de81d7376077792b5597ba752312dfe8d2cff49" exitCode=0 Dec 05 08:18:08 crc kubenswrapper[4780]: I1205 08:18:08.030835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccf1-account-create-update-ztgdc" event={"ID":"87b4dce0-7ba9-44a4-8a74-69df0962589c","Type":"ContainerDied","Data":"38ee1f784b056fd21096d24b7de81d7376077792b5597ba752312dfe8d2cff49"} Dec 05 08:18:08 crc kubenswrapper[4780]: I1205 08:18:08.030890 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccf1-account-create-update-ztgdc" event={"ID":"87b4dce0-7ba9-44a4-8a74-69df0962589c","Type":"ContainerStarted","Data":"d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d"} Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.375207 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.381860 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.401307 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts\") pod \"c5e66d54-de82-4ba3-b098-bef82a296ac1\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402230 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5e66d54-de82-4ba3-b098-bef82a296ac1" (UID: "c5e66d54-de82-4ba3-b098-bef82a296ac1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402258 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5kl\" (UniqueName: \"kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl\") pod \"c5e66d54-de82-4ba3-b098-bef82a296ac1\" (UID: \"c5e66d54-de82-4ba3-b098-bef82a296ac1\") " Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts\") pod \"87b4dce0-7ba9-44a4-8a74-69df0962589c\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402399 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sds\" (UniqueName: \"kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds\") pod \"87b4dce0-7ba9-44a4-8a74-69df0962589c\" (UID: \"87b4dce0-7ba9-44a4-8a74-69df0962589c\") " Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402800 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5e66d54-de82-4ba3-b098-bef82a296ac1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.402943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b4dce0-7ba9-44a4-8a74-69df0962589c" (UID: "87b4dce0-7ba9-44a4-8a74-69df0962589c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.410037 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl" (OuterVolumeSpecName: "kube-api-access-vp5kl") pod "c5e66d54-de82-4ba3-b098-bef82a296ac1" (UID: "c5e66d54-de82-4ba3-b098-bef82a296ac1"). InnerVolumeSpecName "kube-api-access-vp5kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.417113 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds" (OuterVolumeSpecName: "kube-api-access-26sds") pod "87b4dce0-7ba9-44a4-8a74-69df0962589c" (UID: "87b4dce0-7ba9-44a4-8a74-69df0962589c"). InnerVolumeSpecName "kube-api-access-26sds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.504818 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b4dce0-7ba9-44a4-8a74-69df0962589c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.505141 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sds\" (UniqueName: \"kubernetes.io/projected/87b4dce0-7ba9-44a4-8a74-69df0962589c-kube-api-access-26sds\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:09 crc kubenswrapper[4780]: I1205 08:18:09.505153 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5kl\" (UniqueName: \"kubernetes.io/projected/c5e66d54-de82-4ba3-b098-bef82a296ac1-kube-api-access-vp5kl\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.052770 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8pbvc" event={"ID":"c5e66d54-de82-4ba3-b098-bef82a296ac1","Type":"ContainerDied","Data":"6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c"} Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.052818 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb3b922cca23db2b6976b36ada7108f83a5cdca85a96bf3acd154b4114c171c" Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.053208 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8pbvc" Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.059578 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ccf1-account-create-update-ztgdc" event={"ID":"87b4dce0-7ba9-44a4-8a74-69df0962589c","Type":"ContainerDied","Data":"d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d"} Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.059836 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d13e866e975ec2ea85fedb04623fcdb5ef72900e86290dbba279c1ec245b921d" Dec 05 08:18:10 crc kubenswrapper[4780]: I1205 08:18:10.060024 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ccf1-account-create-update-ztgdc" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.376589 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mwwvl"] Dec 05 08:18:11 crc kubenswrapper[4780]: E1205 08:18:11.377027 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b4dce0-7ba9-44a4-8a74-69df0962589c" containerName="mariadb-account-create-update" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.377043 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b4dce0-7ba9-44a4-8a74-69df0962589c" containerName="mariadb-account-create-update" Dec 05 08:18:11 crc kubenswrapper[4780]: E1205 08:18:11.377078 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e66d54-de82-4ba3-b098-bef82a296ac1" containerName="mariadb-database-create" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.377086 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e66d54-de82-4ba3-b098-bef82a296ac1" containerName="mariadb-database-create" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.377279 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e66d54-de82-4ba3-b098-bef82a296ac1" containerName="mariadb-database-create" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.377311 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b4dce0-7ba9-44a4-8a74-69df0962589c" containerName="mariadb-account-create-update" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.378993 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.381223 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.383161 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f4bjw" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.388928 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwwvl"] Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.543690 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.543769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.543858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rcgb\" (UniqueName: \"kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.543921 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.646073 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.646150 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.646233 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rcgb\" (UniqueName: \"kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.646262 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.654264 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.656364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.656457 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.670658 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rcgb\" (UniqueName: \"kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb\") pod \"glance-db-sync-mwwvl\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:11 crc kubenswrapper[4780]: I1205 08:18:11.699971 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:12 crc kubenswrapper[4780]: I1205 08:18:12.240362 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mwwvl"] Dec 05 08:18:13 crc kubenswrapper[4780]: I1205 08:18:13.087195 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwwvl" event={"ID":"4843dc8e-84df-44ba-a1f8-8c626bac3df8","Type":"ContainerStarted","Data":"fd0edfa30a42847e9d53430406d43efe0d30959d193d2d84f0ec82305d71b827"} Dec 05 08:18:15 crc kubenswrapper[4780]: I1205 08:18:15.138959 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:18:15 crc kubenswrapper[4780]: E1205 08:18:15.139553 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:18:25 crc kubenswrapper[4780]: I1205 08:18:25.664789 4780 scope.go:117] "RemoveContainer" containerID="7896ab9304511dcde7af8f31b0ba82270faa37f22f2b2b14ac57832098562edd" Dec 05 08:18:28 crc kubenswrapper[4780]: I1205 08:18:28.138739 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:18:28 crc kubenswrapper[4780]: E1205 08:18:28.139523 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:18:28 crc kubenswrapper[4780]: I1205 08:18:28.594229 4780 scope.go:117] "RemoveContainer" containerID="822fec1d5af7ad72030d3fdd8273f0c3273d45643e6bf70ad162b906ebe51fc6" Dec 05 08:18:28 crc kubenswrapper[4780]: I1205 08:18:28.639658 4780 scope.go:117] "RemoveContainer" containerID="71c19e5abf6001170aedf68a64a856431e1dbfdb327c244ab94ad3cc223ad705" Dec 05 08:18:30 crc kubenswrapper[4780]: I1205 08:18:30.245081 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwwvl" event={"ID":"4843dc8e-84df-44ba-a1f8-8c626bac3df8","Type":"ContainerStarted","Data":"bbfaf9795098c6706e5231432baf703fe35f3467c53d6aa2c0a1f7cea4f4a2e1"} Dec 05 08:18:30 crc kubenswrapper[4780]: I1205 08:18:30.267528 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mwwvl" podStartSLOduration=2.831461546 podStartE2EDuration="19.267503718s" podCreationTimestamp="2025-12-05 08:18:11 +0000 UTC" firstStartedPulling="2025-12-05 08:18:12.245287433 +0000 UTC m=+5526.314803765" lastFinishedPulling="2025-12-05 08:18:28.681329605 +0000 UTC m=+5542.750845937" observedRunningTime="2025-12-05 08:18:30.265638967 +0000 UTC m=+5544.335155309" watchObservedRunningTime="2025-12-05 08:18:30.267503718 +0000 UTC m=+5544.337020060" Dec 05 08:18:33 crc kubenswrapper[4780]: I1205 08:18:33.279161 4780 generic.go:334] "Generic (PLEG): container finished" podID="4843dc8e-84df-44ba-a1f8-8c626bac3df8" 
containerID="bbfaf9795098c6706e5231432baf703fe35f3467c53d6aa2c0a1f7cea4f4a2e1" exitCode=0 Dec 05 08:18:33 crc kubenswrapper[4780]: I1205 08:18:33.279500 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwwvl" event={"ID":"4843dc8e-84df-44ba-a1f8-8c626bac3df8","Type":"ContainerDied","Data":"bbfaf9795098c6706e5231432baf703fe35f3467c53d6aa2c0a1f7cea4f4a2e1"} Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.673919 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.833167 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rcgb\" (UniqueName: \"kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb\") pod \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.833262 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data\") pod \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.833330 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle\") pod \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.833470 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data\") pod \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\" (UID: \"4843dc8e-84df-44ba-a1f8-8c626bac3df8\") " Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.838626 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4843dc8e-84df-44ba-a1f8-8c626bac3df8" (UID: "4843dc8e-84df-44ba-a1f8-8c626bac3df8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.842434 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb" (OuterVolumeSpecName: "kube-api-access-7rcgb") pod "4843dc8e-84df-44ba-a1f8-8c626bac3df8" (UID: "4843dc8e-84df-44ba-a1f8-8c626bac3df8"). InnerVolumeSpecName "kube-api-access-7rcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.858716 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4843dc8e-84df-44ba-a1f8-8c626bac3df8" (UID: "4843dc8e-84df-44ba-a1f8-8c626bac3df8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.879180 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data" (OuterVolumeSpecName: "config-data") pod "4843dc8e-84df-44ba-a1f8-8c626bac3df8" (UID: "4843dc8e-84df-44ba-a1f8-8c626bac3df8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.935391 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rcgb\" (UniqueName: \"kubernetes.io/projected/4843dc8e-84df-44ba-a1f8-8c626bac3df8-kube-api-access-7rcgb\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.935423 4780 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.935435 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:34 crc kubenswrapper[4780]: I1205 08:18:34.935447 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4843dc8e-84df-44ba-a1f8-8c626bac3df8-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.295685 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mwwvl" event={"ID":"4843dc8e-84df-44ba-a1f8-8c626bac3df8","Type":"ContainerDied","Data":"fd0edfa30a42847e9d53430406d43efe0d30959d193d2d84f0ec82305d71b827"} Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.295721 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0edfa30a42847e9d53430406d43efe0d30959d193d2d84f0ec82305d71b827" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.295828 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mwwvl" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.672328 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:18:35 crc kubenswrapper[4780]: E1205 08:18:35.672812 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4843dc8e-84df-44ba-a1f8-8c626bac3df8" containerName="glance-db-sync" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.672828 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4843dc8e-84df-44ba-a1f8-8c626bac3df8" containerName="glance-db-sync" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.673064 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4843dc8e-84df-44ba-a1f8-8c626bac3df8" containerName="glance-db-sync" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.673984 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.693163 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.695221 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.703787 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.704166 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f4bjw" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.704370 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.708775 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.720753 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.827019 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.829600 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.831607 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.838728 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854374 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854453 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2v5v\" (UniqueName: \"kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854485 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854518 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854545 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: 
\"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854579 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854642 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854755 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.854802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.956960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957531 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957589 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957625 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957784 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957838 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2v5v\" (UniqueName: \"kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957870 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957917 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957938 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957954 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.957977 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: 
\"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958010 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqtv\" (UniqueName: \"kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958106 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.958199 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.959028 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.959837 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: 
\"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.960185 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.960808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.963256 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.964455 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.965982 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.978457 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2v5v\" (UniqueName: \"kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v\") pod \"glance-default-external-api-0\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.978494 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx\") pod \"dnsmasq-dns-67984c8945-l4jmd\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:35 crc kubenswrapper[4780]: I1205 08:18:35.997138 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.031098 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.063546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064085 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064219 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064470 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.065136 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqtv\" (UniqueName: \"kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064667 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.064629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.071274 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 
08:18:36.071553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.072203 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.086384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqtv\" (UniqueName: \"kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv\") pod \"glance-default-internal-api-0\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:36 crc kubenswrapper[4780]: I1205 08:18:36.163292 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:36.643050 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:36.840182 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:36.884465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:36.982516 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:37 crc kubenswrapper[4780]: W1205 08:18:36.988362 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e7729f_5cb8_40a5_8aa8_dbcd7033e85c.slice/crio-d1a57c8fa40c0556236282e3a5244d0e5061a973e6f87eec8ef0d306188f0027 WatchSource:0}: Error finding container d1a57c8fa40c0556236282e3a5244d0e5061a973e6f87eec8ef0d306188f0027: Status 404 returned error can't find the container with id d1a57c8fa40c0556236282e3a5244d0e5061a973e6f87eec8ef0d306188f0027 Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.338477 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerStarted","Data":"b1edbd6cb61301c0c221ec750fd341e17ad5f7ba350249a22c4315a3fc68d87d"} Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.343162 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerStarted","Data":"d1a57c8fa40c0556236282e3a5244d0e5061a973e6f87eec8ef0d306188f0027"} Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.345671 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerID="d6d25aac74204f4a91291434fdc4f513ab5476d6a2d123880697e20668c3290a" exitCode=0 Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.345712 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" 
event={"ID":"ae0b581c-b4ba-493a-a0b0-b309c0e18c46","Type":"ContainerDied","Data":"d6d25aac74204f4a91291434fdc4f513ab5476d6a2d123880697e20668c3290a"} Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.345771 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" event={"ID":"ae0b581c-b4ba-493a-a0b0-b309c0e18c46","Type":"ContainerStarted","Data":"6d99bd4f9fda67b0a31e646d2d2db83672e221c98ef2e620bc1a95cafc26f27e"} Dec 05 08:18:37 crc kubenswrapper[4780]: I1205 08:18:37.886138 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.355842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerStarted","Data":"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f"} Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.356296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerStarted","Data":"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d"} Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.356404 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-log" containerID="cri-o://62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" gracePeriod=30 Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.356752 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-httpd" containerID="cri-o://134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" gracePeriod=30 Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.359479 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" event={"ID":"ae0b581c-b4ba-493a-a0b0-b309c0e18c46","Type":"ContainerStarted","Data":"bf4b59ac7e3470e0773e802e944c8190cd8f71d1fc2543bc1ed726b473ced0ec"} Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.360484 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.363167 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerStarted","Data":"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a"} Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.363215 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerStarted","Data":"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b"} Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.363271 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-log" containerID="cri-o://07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" gracePeriod=30 Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.363383 4780 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-httpd" containerID="cri-o://fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" gracePeriod=30 Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.383344 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.383313628 podStartE2EDuration="3.383313628s" podCreationTimestamp="2025-12-05 08:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:38.379035142 +0000 UTC m=+5552.448551494" watchObservedRunningTime="2025-12-05 08:18:38.383313628 +0000 UTC m=+5552.452829990" Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.415130 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.415097953 podStartE2EDuration="3.415097953s" podCreationTimestamp="2025-12-05 08:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:38.402617404 +0000 UTC m=+5552.472133746" watchObservedRunningTime="2025-12-05 08:18:38.415097953 +0000 UTC m=+5552.484614285" Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.431575 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" podStartSLOduration=3.4315565709999998 podStartE2EDuration="3.431556571s" podCreationTimestamp="2025-12-05 08:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:38.428628201 +0000 UTC m=+5552.498144533" watchObservedRunningTime="2025-12-05 08:18:38.431556571 +0000 UTC m=+5552.501072903" Dec 05 08:18:38 crc kubenswrapper[4780]: I1205 08:18:38.968385 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.135999 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136053 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2v5v\" (UniqueName: \"kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136118 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136228 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136299 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136398 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts\") pod \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\" (UID: \"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136828 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.136932 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs" (OuterVolumeSpecName: "logs") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.142999 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts" (OuterVolumeSpecName: "scripts") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.144583 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v" (OuterVolumeSpecName: "kube-api-access-t2v5v") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "kube-api-access-t2v5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.165805 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.209473 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data" (OuterVolumeSpecName: "config-data") pod "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" (UID: "84e7729f-5cb8-40a5-8aa8-dbcd7033e85c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239124 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239405 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239415 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239423 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239432 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2v5v\" (UniqueName: \"kubernetes.io/projected/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-kube-api-access-t2v5v\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.239485 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.262239 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374634 4780 generic.go:334] "Generic (PLEG): container finished" podID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerID="fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" exitCode=143 Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374674 4780 generic.go:334] "Generic (PLEG): container finished" podID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerID="07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" exitCode=143 Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374713 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374736 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerDied","Data":"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374779 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerDied","Data":"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374793 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdfb62aa-3467-4b3f-9523-245faa6631bc","Type":"ContainerDied","Data":"b1edbd6cb61301c0c221ec750fd341e17ad5f7ba350249a22c4315a3fc68d87d"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.374823 4780 scope.go:117] "RemoveContainer" containerID="fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.376873 4780 generic.go:334] "Generic (PLEG): container finished" podID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerID="134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" exitCode=143 Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.376921 4780 generic.go:334] "Generic (PLEG): container finished" podID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerID="62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" exitCode=143 Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.377518 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.380033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerDied","Data":"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.380108 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerDied","Data":"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.380128 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84e7729f-5cb8-40a5-8aa8-dbcd7033e85c","Type":"ContainerDied","Data":"d1a57c8fa40c0556236282e3a5244d0e5061a973e6f87eec8ef0d306188f0027"} Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.420245 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.423974 4780 scope.go:117] "RemoveContainer" containerID="07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.432124 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444380 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle\") pod \"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444525 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run\") pod \"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444593 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data\") pod \"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444815 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqtv\" (UniqueName: \"kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv\") pod \"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts\") pod \"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.444926 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs\") pod 
\"fdfb62aa-3467-4b3f-9523-245faa6631bc\" (UID: \"fdfb62aa-3467-4b3f-9523-245faa6631bc\") " Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.446083 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs" (OuterVolumeSpecName: "logs") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.446146 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.453036 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv" (OuterVolumeSpecName: "kube-api-access-4qqtv") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "kube-api-access-4qqtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.455490 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts" (OuterVolumeSpecName: "scripts") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.461265 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.461803 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-httpd" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.461829 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-httpd" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.461853 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.461860 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.461912 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.461921 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.461931 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-httpd" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.461938 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-httpd" Dec 05 08:18:39 crc 
kubenswrapper[4780]: I1205 08:18:39.463449 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-httpd" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.463488 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.463502 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" containerName="glance-log" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.463514 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" containerName="glance-httpd" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.464942 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.467956 4780 scope.go:117] "RemoveContainer" containerID="fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.468319 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.469970 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a\": container with ID starting with fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a not found: ID does not exist" containerID="fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.470023 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a"} err="failed to get container status \"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a\": rpc error: code = NotFound desc = could not find container \"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a\": container with ID starting with fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.470056 4780 scope.go:117] "RemoveContainer" containerID="07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.470280 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.470550 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b\": container with ID starting with 07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b not found: ID does not exist" containerID="07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.470630 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b"} err="failed to get container status \"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b\": rpc error: code = 
NotFound desc = could not find container \"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b\": container with ID starting with 07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.470788 4780 scope.go:117] "RemoveContainer" containerID="fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.471621 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.471992 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a"} err="failed to get container status \"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a\": rpc error: code = NotFound desc = could not find container \"fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a\": container with ID starting with fa68a931b788a263870ccb605cca02e90a6e2436431bb9f3d88a54682a01581a not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.472084 4780 scope.go:117] "RemoveContainer" containerID="07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.473296 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b"} err="failed to get container status \"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b\": rpc error: code = NotFound desc = could not find container \"07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b\": container with ID starting with 07dea033ce6a1659f2746e1d40b7c1c144fb1e6e29e476d68baf57a506b2125b not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.473330 4780 scope.go:117] "RemoveContainer" containerID="134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.499752 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.502430 4780 scope.go:117] "RemoveContainer" containerID="62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.532072 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data" (OuterVolumeSpecName: "config-data") pod "fdfb62aa-3467-4b3f-9523-245faa6631bc" (UID: "fdfb62aa-3467-4b3f-9523-245faa6631bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.537383 4780 scope.go:117] "RemoveContainer" containerID="134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.537796 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f\": container with ID starting with 134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f not found: ID does not exist" containerID="134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.537830 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f"} err="failed to get container status \"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f\": rpc error: code = NotFound desc = could not find container \"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f\": container with ID starting with 134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.537856 4780 scope.go:117] "RemoveContainer" containerID="62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" Dec 05 08:18:39 crc kubenswrapper[4780]: E1205 08:18:39.538310 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d\": container with ID starting with 62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d not found: ID does not exist" containerID="62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.538340 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d"} err="failed to get container status \"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d\": rpc error: code = NotFound desc = could not find container \"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d\": container with ID starting with 62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.538358 4780 scope.go:117] "RemoveContainer" containerID="134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.538544 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f"} err="failed to get container status \"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f\": rpc error: code = NotFound desc = could not find container \"134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f\": container with ID starting with 134e96e36fe5815235ff648349fd18ad0f9857ac55c9799a38ff1430db9dff6f not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.538573 4780 scope.go:117] "RemoveContainer" containerID="62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.538923 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d"} err="failed to get container status \"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d\": rpc error: code = NotFound desc = could not find container \"62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d\": container with ID starting with 62d02f82f342e98eca399051cb4a782155887584d2a5e902b7e52af299c3cb4d not found: ID does not exist" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549188 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549233 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549249 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549261 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqtv\" (UniqueName: \"kubernetes.io/projected/fdfb62aa-3467-4b3f-9523-245faa6631bc-kube-api-access-4qqtv\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549273 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdfb62aa-3467-4b3f-9523-245faa6631bc-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.549284 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdfb62aa-3467-4b3f-9523-245faa6631bc-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651075 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651360 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651459 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8tbp\" (UniqueName: \"kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651506 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651540 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.651967 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.652119 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.716015 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.748183 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753647 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753760 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753792 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8tbp\" (UniqueName: \"kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp\") pod 
\"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753829 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.753844 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.754320 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.755275 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.758678 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.759367 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.764495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.764597 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.776041 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.777981 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.779083 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8tbp\" (UniqueName: \"kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp\") pod \"glance-default-external-api-0\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.779605 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.780384 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.784834 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.803283 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9s54\" (UniqueName: \"kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959577 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959650 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959701 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:39 crc kubenswrapper[4780]: I1205 08:18:39.959740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.061891 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.061993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.062592 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.062656 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9s54\" (UniqueName: \"kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.062686 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.062715 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.062736 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.063512 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.063542 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.068507 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.068928 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.075225 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.076604 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.093062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9s54\" (UniqueName: \"kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54\") pod \"glance-default-internal-api-0\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.111833 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.139963 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:18:40 crc kubenswrapper[4780]: E1205 08:18:40.140256 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.150804 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e7729f-5cb8-40a5-8aa8-dbcd7033e85c" path="/var/lib/kubelet/pods/84e7729f-5cb8-40a5-8aa8-dbcd7033e85c/volumes" Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.151843 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfb62aa-3467-4b3f-9523-245faa6631bc" path="/var/lib/kubelet/pods/fdfb62aa-3467-4b3f-9523-245faa6631bc/volumes" Dec 05 08:18:40 crc kubenswrapper[4780]: W1205 08:18:40.362555 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce2c4c7_d952_41e3_af8b_7446f9435571.slice/crio-b148ed82d64ceeb58ce069e9fbdcf184251986429ae3cd3077ab102d1f3685bc WatchSource:0}: Error finding container b148ed82d64ceeb58ce069e9fbdcf184251986429ae3cd3077ab102d1f3685bc: Status 404 returned error can't find the container with id b148ed82d64ceeb58ce069e9fbdcf184251986429ae3cd3077ab102d1f3685bc Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.368442 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.420157 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerStarted","Data":"b148ed82d64ceeb58ce069e9fbdcf184251986429ae3cd3077ab102d1f3685bc"} Dec 05 08:18:40 crc kubenswrapper[4780]: I1205 08:18:40.641606 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:18:41 crc kubenswrapper[4780]: I1205 08:18:41.441210 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerStarted","Data":"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1"} Dec 05 08:18:41 crc kubenswrapper[4780]: I1205 08:18:41.443044 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerStarted","Data":"0f16f42d27f57a64bf0c7fd113418e5c0b643233359e1f0a997ea035d28b6e5f"} Dec 05 08:18:41 crc kubenswrapper[4780]: I1205 08:18:41.443093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerStarted","Data":"c6bd56efee2b63bd2b31ab7fb72c348a7a97395f52121a6ff174e2a62c57feb4"} Dec 05 08:18:42 crc kubenswrapper[4780]: I1205 08:18:42.452963 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerStarted","Data":"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1"} Dec 05 08:18:42 crc kubenswrapper[4780]: I1205 08:18:42.455118 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerStarted","Data":"7eb585f00e9cf3bca639d42415cfb25ab60b1feb49f5cc0f752f9a4756d0995c"} Dec 05 08:18:42 crc kubenswrapper[4780]: I1205 08:18:42.476355 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.476334095 podStartE2EDuration="3.476334095s" podCreationTimestamp="2025-12-05 08:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:42.469306834 +0000 UTC m=+5556.538823186" watchObservedRunningTime="2025-12-05 08:18:42.476334095 +0000 UTC m=+5556.545850427" Dec 05 08:18:42 crc kubenswrapper[4780]: I1205 08:18:42.501774 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.501752206 podStartE2EDuration="3.501752206s" podCreationTimestamp="2025-12-05 08:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:42.498573829 +0000 UTC m=+5556.568090151" watchObservedRunningTime="2025-12-05 08:18:42.501752206 +0000 UTC m=+5556.571268548" Dec 05 08:18:45 crc kubenswrapper[4780]: I1205 08:18:45.998727 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.086156 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.086406 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="dnsmasq-dns" containerID="cri-o://40a48118eaeb20ea32c39b619db2fb7c593b0621d9bdd9cee19d65f38268fa3a" gracePeriod=10 Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.500099 4780 generic.go:334] "Generic (PLEG): container finished" podID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerID="40a48118eaeb20ea32c39b619db2fb7c593b0621d9bdd9cee19d65f38268fa3a" exitCode=0 Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.500142 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" event={"ID":"96a512b2-ee45-4b1e-bb25-53f11184d533","Type":"ContainerDied","Data":"40a48118eaeb20ea32c39b619db2fb7c593b0621d9bdd9cee19d65f38268fa3a"} Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.500515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" event={"ID":"96a512b2-ee45-4b1e-bb25-53f11184d533","Type":"ContainerDied","Data":"7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02"} Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.500539 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5287ddadb229963dc99afb6921d89fc32214201df6db9eeeec05014cc40d02" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.562657 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.701667 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config\") pod \"96a512b2-ee45-4b1e-bb25-53f11184d533\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.701771 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb\") pod \"96a512b2-ee45-4b1e-bb25-53f11184d533\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.701858 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb\") pod \"96a512b2-ee45-4b1e-bb25-53f11184d533\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.701958 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc\") pod \"96a512b2-ee45-4b1e-bb25-53f11184d533\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.702046 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnndp\" (UniqueName: \"kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp\") pod \"96a512b2-ee45-4b1e-bb25-53f11184d533\" (UID: \"96a512b2-ee45-4b1e-bb25-53f11184d533\") " Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.709076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp" (OuterVolumeSpecName: "kube-api-access-jnndp") pod "96a512b2-ee45-4b1e-bb25-53f11184d533" (UID: "96a512b2-ee45-4b1e-bb25-53f11184d533"). InnerVolumeSpecName "kube-api-access-jnndp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.747797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96a512b2-ee45-4b1e-bb25-53f11184d533" (UID: "96a512b2-ee45-4b1e-bb25-53f11184d533"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.748634 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96a512b2-ee45-4b1e-bb25-53f11184d533" (UID: "96a512b2-ee45-4b1e-bb25-53f11184d533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.750061 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config" (OuterVolumeSpecName: "config") pod "96a512b2-ee45-4b1e-bb25-53f11184d533" (UID: "96a512b2-ee45-4b1e-bb25-53f11184d533"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.755354 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96a512b2-ee45-4b1e-bb25-53f11184d533" (UID: "96a512b2-ee45-4b1e-bb25-53f11184d533"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.805005 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.805043 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.805056 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.805065 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96a512b2-ee45-4b1e-bb25-53f11184d533-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:46 crc kubenswrapper[4780]: I1205 08:18:46.805076 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnndp\" (UniqueName: \"kubernetes.io/projected/96a512b2-ee45-4b1e-bb25-53f11184d533-kube-api-access-jnndp\") on node \"crc\" DevicePath \"\"" Dec 05 08:18:47 crc kubenswrapper[4780]: I1205 08:18:47.509061 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8b9ddbf7-4wtjp" Dec 05 08:18:47 crc kubenswrapper[4780]: I1205 08:18:47.547345 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:18:47 crc kubenswrapper[4780]: I1205 08:18:47.554684 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d8b9ddbf7-4wtjp"] Dec 05 08:18:48 crc kubenswrapper[4780]: I1205 08:18:48.149301 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" path="/var/lib/kubelet/pods/96a512b2-ee45-4b1e-bb25-53f11184d533/volumes" Dec 05 08:18:49 crc kubenswrapper[4780]: I1205 08:18:49.804549 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 08:18:49 crc kubenswrapper[4780]: I1205 08:18:49.804914 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 08:18:49 crc kubenswrapper[4780]: I1205 08:18:49.846426 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 08:18:49 crc kubenswrapper[4780]: I1205 08:18:49.852832 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.112963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.113019 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.148134 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.152092 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.536173 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.536570 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.536667 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 08:18:50 crc kubenswrapper[4780]: I1205 08:18:50.536763 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:51 crc kubenswrapper[4780]: I1205 08:18:51.138827 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:18:51 crc kubenswrapper[4780]: E1205 08:18:51.139090 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 
08:18:52.465438 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 08:18:52.496588 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 08:18:52.552294 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 08:18:52.553254 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 08:18:52.562432 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 08:18:52 crc kubenswrapper[4780]: I1205 08:18:52.590957 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.510515 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tz48s"] Dec 05 08:18:58 crc kubenswrapper[4780]: E1205 08:18:58.512279 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="dnsmasq-dns" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.512393 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="dnsmasq-dns" Dec 05 08:18:58 crc kubenswrapper[4780]: E1205 08:18:58.512491 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="init" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.512559 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="init" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.512841 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a512b2-ee45-4b1e-bb25-53f11184d533" containerName="dnsmasq-dns" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.513605 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.520437 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tz48s"] Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.625567 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f270-account-create-update-tn5q9"] Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.626921 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.629727 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.649460 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f270-account-create-update-tn5q9"] Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.664278 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4wr\" (UniqueName: \"kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.664678 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.766527 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4wr\" (UniqueName: \"kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.766617 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ksx\" (UniqueName: \"kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.766660 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.766720 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.767626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.790419 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4wr\" (UniqueName: 
\"kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr\") pod \"placement-db-create-tz48s\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.868386 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ksx\" (UniqueName: \"kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.868454 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.869286 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.886052 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ksx\" (UniqueName: \"kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx\") pod \"placement-f270-account-create-update-tn5q9\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.893508 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz48s" Dec 05 08:18:58 crc kubenswrapper[4780]: I1205 08:18:58.946781 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.362652 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tz48s"] Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.465675 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f270-account-create-update-tn5q9"] Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.610522 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f270-account-create-update-tn5q9" event={"ID":"1063dd08-b420-4608-8d65-168c51c6ec7a","Type":"ContainerStarted","Data":"be7a9d954488b68c37e3a1bf1d93dba1b571a4ec34ae3389759dab6b7b9a91fb"} Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.611790 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz48s" event={"ID":"67049d16-1624-44fc-9a39-0a9897640c19","Type":"ContainerStarted","Data":"65167a0a77abcefc14f54f7848b3ad99a9995a024464c80d3e541109fab9b90b"} Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.611824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz48s" event={"ID":"67049d16-1624-44fc-9a39-0a9897640c19","Type":"ContainerStarted","Data":"d9e5005b76345fd02453ca57ec3367a3068c172bd66123c02d88c007a59d67d0"} Dec 05 08:18:59 crc kubenswrapper[4780]: I1205 08:18:59.631604 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-tz48s" podStartSLOduration=1.631587701 podStartE2EDuration="1.631587701s" podCreationTimestamp="2025-12-05 08:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:18:59.627644543 +0000 UTC m=+5573.697160875" watchObservedRunningTime="2025-12-05 08:18:59.631587701 +0000 UTC m=+5573.701104033" Dec 05 08:19:00 crc kubenswrapper[4780]: I1205 08:19:00.620707 4780 generic.go:334] "Generic (PLEG): container finished" podID="67049d16-1624-44fc-9a39-0a9897640c19" containerID="65167a0a77abcefc14f54f7848b3ad99a9995a024464c80d3e541109fab9b90b" exitCode=0 Dec 05 08:19:00 crc kubenswrapper[4780]: I1205 08:19:00.620808 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz48s" event={"ID":"67049d16-1624-44fc-9a39-0a9897640c19","Type":"ContainerDied","Data":"65167a0a77abcefc14f54f7848b3ad99a9995a024464c80d3e541109fab9b90b"} Dec 05 08:19:00 crc kubenswrapper[4780]: I1205 08:19:00.623596 4780 generic.go:334] "Generic (PLEG): container finished" podID="1063dd08-b420-4608-8d65-168c51c6ec7a" containerID="49df14526ac169e1b362aca534bddc1754c1b52f30783d2820588dffe01c7f58" exitCode=0 Dec 05 08:19:00 crc kubenswrapper[4780]: I1205 08:19:00.623631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f270-account-create-update-tn5q9" event={"ID":"1063dd08-b420-4608-8d65-168c51c6ec7a","Type":"ContainerDied","Data":"49df14526ac169e1b362aca534bddc1754c1b52f30783d2820588dffe01c7f58"} Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.027315 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.038284 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tz48s" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.137302 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4ksx\" (UniqueName: \"kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx\") pod \"1063dd08-b420-4608-8d65-168c51c6ec7a\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.137375 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts\") pod \"1063dd08-b420-4608-8d65-168c51c6ec7a\" (UID: \"1063dd08-b420-4608-8d65-168c51c6ec7a\") " Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.138201 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1063dd08-b420-4608-8d65-168c51c6ec7a" (UID: "1063dd08-b420-4608-8d65-168c51c6ec7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.143317 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx" (OuterVolumeSpecName: "kube-api-access-q4ksx") pod "1063dd08-b420-4608-8d65-168c51c6ec7a" (UID: "1063dd08-b420-4608-8d65-168c51c6ec7a"). InnerVolumeSpecName "kube-api-access-q4ksx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.239106 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4wr\" (UniqueName: \"kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr\") pod \"67049d16-1624-44fc-9a39-0a9897640c19\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.239308 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts\") pod \"67049d16-1624-44fc-9a39-0a9897640c19\" (UID: \"67049d16-1624-44fc-9a39-0a9897640c19\") " Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.239797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67049d16-1624-44fc-9a39-0a9897640c19" (UID: "67049d16-1624-44fc-9a39-0a9897640c19"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.239824 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4ksx\" (UniqueName: \"kubernetes.io/projected/1063dd08-b420-4608-8d65-168c51c6ec7a-kube-api-access-q4ksx\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.239911 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1063dd08-b420-4608-8d65-168c51c6ec7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.241568 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr" (OuterVolumeSpecName: "kube-api-access-ff4wr") pod "67049d16-1624-44fc-9a39-0a9897640c19" (UID: "67049d16-1624-44fc-9a39-0a9897640c19"). InnerVolumeSpecName "kube-api-access-ff4wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.341447 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4wr\" (UniqueName: \"kubernetes.io/projected/67049d16-1624-44fc-9a39-0a9897640c19-kube-api-access-ff4wr\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.341491 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67049d16-1624-44fc-9a39-0a9897640c19-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.640072 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tz48s" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.640069 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tz48s" event={"ID":"67049d16-1624-44fc-9a39-0a9897640c19","Type":"ContainerDied","Data":"d9e5005b76345fd02453ca57ec3367a3068c172bd66123c02d88c007a59d67d0"} Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.640223 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e5005b76345fd02453ca57ec3367a3068c172bd66123c02d88c007a59d67d0" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.641572 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f270-account-create-update-tn5q9" event={"ID":"1063dd08-b420-4608-8d65-168c51c6ec7a","Type":"ContainerDied","Data":"be7a9d954488b68c37e3a1bf1d93dba1b571a4ec34ae3389759dab6b7b9a91fb"} Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.641600 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be7a9d954488b68c37e3a1bf1d93dba1b571a4ec34ae3389759dab6b7b9a91fb" Dec 05 08:19:02 crc kubenswrapper[4780]: I1205 08:19:02.641698 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f270-account-create-update-tn5q9" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.884416 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"] Dec 05 08:19:03 crc kubenswrapper[4780]: E1205 08:19:03.885173 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1063dd08-b420-4608-8d65-168c51c6ec7a" containerName="mariadb-account-create-update" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.885189 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1063dd08-b420-4608-8d65-168c51c6ec7a" containerName="mariadb-account-create-update" Dec 05 08:19:03 crc kubenswrapper[4780]: E1205 08:19:03.885228 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67049d16-1624-44fc-9a39-0a9897640c19" containerName="mariadb-database-create" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.885235 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="67049d16-1624-44fc-9a39-0a9897640c19" containerName="mariadb-database-create" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.885418 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1063dd08-b420-4608-8d65-168c51c6ec7a" containerName="mariadb-account-create-update" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.885435 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="67049d16-1624-44fc-9a39-0a9897640c19" containerName="mariadb-database-create" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.886520 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.907319 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"] Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.959618 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lxtzh"] Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.961033 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.963923 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-kb5br" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.964172 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.964413 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 08:19:03 crc kubenswrapper[4780]: I1205 08:19:03.972019 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lxtzh"] Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.081298 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.081611 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cnx\" (UniqueName: \"kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.081746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.081842 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.081980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.082080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.082206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq28r\" (UniqueName: \"kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc 
kubenswrapper[4780]: I1205 08:19:04.082304 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.082370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.082451 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184603 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184655 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq28r\" (UniqueName: \"kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184699 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184771 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184804 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184829 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-f5cnx\" (UniqueName: \"kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.184973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.185015 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.185074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.185174 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.186314 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.186386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.186440 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.186545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.191651 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") pod \"placement-db-sync-lxtzh\" (UID: 
\"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.201744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.202196 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.203965 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq28r\" (UniqueName: \"kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r\") pod \"placement-db-sync-lxtzh\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.205940 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cnx\" (UniqueName: \"kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx\") pod \"dnsmasq-dns-749f85f4b9-zj89c\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") " pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.207818 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.293388 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.719279 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"] Dec 05 08:19:04 crc kubenswrapper[4780]: W1205 08:19:04.749048 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23452946_1048_4f09_a637_3f2e3fa9af17.slice/crio-10d78c785d127dab3c1706fc1fa89434563e0c72ed818b0c2d705ddb83213817 WatchSource:0}: Error finding container 10d78c785d127dab3c1706fc1fa89434563e0c72ed818b0c2d705ddb83213817: Status 404 returned error can't find the container with id 10d78c785d127dab3c1706fc1fa89434563e0c72ed818b0c2d705ddb83213817 Dec 05 08:19:04 crc kubenswrapper[4780]: I1205 08:19:04.869770 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lxtzh"] Dec 05 08:19:04 crc kubenswrapper[4780]: W1205 08:19:04.876754 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e102e1_771d_483a_bb67_06a66c885bb6.slice/crio-1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91 WatchSource:0}: Error finding container 1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91: Status 404 returned error can't find the container with id 1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91 Dec 05 08:19:05 crc kubenswrapper[4780]: I1205 08:19:05.677207 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lxtzh" event={"ID":"90e102e1-771d-483a-bb67-06a66c885bb6","Type":"ContainerStarted","Data":"1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91"} Dec 05 08:19:05 crc kubenswrapper[4780]: I1205 08:19:05.681293 4780 generic.go:334] "Generic (PLEG): container finished" podID="23452946-1048-4f09-a637-3f2e3fa9af17" containerID="87ce1694f6d9d220015b27a238eacc181c5b49d2c75bbf9e835ec4340700dd58" exitCode=0 Dec 05 08:19:05 crc kubenswrapper[4780]: I1205 08:19:05.681337 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" event={"ID":"23452946-1048-4f09-a637-3f2e3fa9af17","Type":"ContainerDied","Data":"87ce1694f6d9d220015b27a238eacc181c5b49d2c75bbf9e835ec4340700dd58"} Dec 05 08:19:05 crc kubenswrapper[4780]: I1205 08:19:05.681388 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" event={"ID":"23452946-1048-4f09-a637-3f2e3fa9af17","Type":"ContainerStarted","Data":"10d78c785d127dab3c1706fc1fa89434563e0c72ed818b0c2d705ddb83213817"} Dec 05 08:19:06 crc kubenswrapper[4780]: I1205 08:19:06.145467 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:19:06 crc kubenswrapper[4780]: I1205 08:19:06.695752 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96"} Dec 05 08:19:06 crc kubenswrapper[4780]: I1205 08:19:06.700990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" event={"ID":"23452946-1048-4f09-a637-3f2e3fa9af17","Type":"ContainerStarted","Data":"ad7861383c538cf57d64e940aa48ff781dc39f76083750a28d2099729d7f7db9"} Dec 05 08:19:06 crc kubenswrapper[4780]: I1205 
08:19:06.701230 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:06 crc kubenswrapper[4780]: I1205 08:19:06.791907 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" podStartSLOduration=3.791858222 podStartE2EDuration="3.791858222s" podCreationTimestamp="2025-12-05 08:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:19:06.784165902 +0000 UTC m=+5580.853682244" watchObservedRunningTime="2025-12-05 08:19:06.791858222 +0000 UTC m=+5580.861374554" Dec 05 08:19:08 crc kubenswrapper[4780]: I1205 08:19:08.719545 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lxtzh" event={"ID":"90e102e1-771d-483a-bb67-06a66c885bb6","Type":"ContainerStarted","Data":"a7e5bb34ace7ede46381fdd22b9329fdce20dc174f0a9a59e4296ba0297b6e53"} Dec 05 08:19:08 crc kubenswrapper[4780]: I1205 08:19:08.741306 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lxtzh" podStartSLOduration=2.227500393 podStartE2EDuration="5.741287594s" podCreationTimestamp="2025-12-05 08:19:03 +0000 UTC" firstStartedPulling="2025-12-05 08:19:04.87918134 +0000 UTC m=+5578.948697672" lastFinishedPulling="2025-12-05 08:19:08.392968541 +0000 UTC m=+5582.462484873" observedRunningTime="2025-12-05 08:19:08.736093484 +0000 UTC m=+5582.805609816" watchObservedRunningTime="2025-12-05 08:19:08.741287594 +0000 UTC m=+5582.810803926" Dec 05 08:19:10 crc kubenswrapper[4780]: I1205 08:19:10.765019 4780 generic.go:334] "Generic (PLEG): container finished" podID="90e102e1-771d-483a-bb67-06a66c885bb6" containerID="a7e5bb34ace7ede46381fdd22b9329fdce20dc174f0a9a59e4296ba0297b6e53" exitCode=0 Dec 05 08:19:10 crc kubenswrapper[4780]: I1205 08:19:10.765108 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lxtzh" event={"ID":"90e102e1-771d-483a-bb67-06a66c885bb6","Type":"ContainerDied","Data":"a7e5bb34ace7ede46381fdd22b9329fdce20dc174f0a9a59e4296ba0297b6e53"} Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.164404 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285542 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285572 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq28r\" (UniqueName: \"kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285758 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.285797 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs" (OuterVolumeSpecName: "logs") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.286587 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90e102e1-771d-483a-bb67-06a66c885bb6-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.290596 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r" (OuterVolumeSpecName: "kube-api-access-hq28r") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6"). InnerVolumeSpecName "kube-api-access-hq28r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.290679 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts" (OuterVolumeSpecName: "scripts") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:19:12 crc kubenswrapper[4780]: E1205 08:19:12.308428 4780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data podName:90e102e1-771d-483a-bb67-06a66c885bb6 nodeName:}" failed. No retries permitted until 2025-12-05 08:19:12.808396396 +0000 UTC m=+5586.877912728 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6") : error deleting /var/lib/kubelet/pods/90e102e1-771d-483a-bb67-06a66c885bb6/volume-subpaths: remove /var/lib/kubelet/pods/90e102e1-771d-483a-bb67-06a66c885bb6/volume-subpaths: no such file or directory Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.311389 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.388718 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.388755 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq28r\" (UniqueName: \"kubernetes.io/projected/90e102e1-771d-483a-bb67-06a66c885bb6-kube-api-access-hq28r\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.388767 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.781807 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lxtzh" event={"ID":"90e102e1-771d-483a-bb67-06a66c885bb6","Type":"ContainerDied","Data":"1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91"} Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.782072 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ccc19b703f225b0cb08fca16878986902d17ea73ed00d24bab999bc50a79a91" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.781908 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lxtzh" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.870563 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d948d4648-j8s68"] Dec 05 08:19:12 crc kubenswrapper[4780]: E1205 08:19:12.870990 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e102e1-771d-483a-bb67-06a66c885bb6" containerName="placement-db-sync" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.871014 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e102e1-771d-483a-bb67-06a66c885bb6" containerName="placement-db-sync" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.871175 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e102e1-771d-483a-bb67-06a66c885bb6" containerName="placement-db-sync" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.872227 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.874479 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.874540 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.887191 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d948d4648-j8s68"] Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.907451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") pod \"90e102e1-771d-483a-bb67-06a66c885bb6\" (UID: \"90e102e1-771d-483a-bb67-06a66c885bb6\") " Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.907824 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qk26\" (UniqueName: \"kubernetes.io/projected/ad846df1-a795-4eb2-a063-e7f92b916f78-kube-api-access-4qk26\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.907865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-scripts\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.907935 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-config-data\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.907979 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad846df1-a795-4eb2-a063-e7f92b916f78-logs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.908140 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-internal-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.908254 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-combined-ca-bundle\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.908281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-public-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:12 crc kubenswrapper[4780]: I1205 08:19:12.916288 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data" (OuterVolumeSpecName: "config-data") pod "90e102e1-771d-483a-bb67-06a66c885bb6" (UID: "90e102e1-771d-483a-bb67-06a66c885bb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009334 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-config-data\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009398 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad846df1-a795-4eb2-a063-e7f92b916f78-logs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009454 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-internal-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009502 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-combined-ca-bundle\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-public-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009571 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qk26\" (UniqueName: \"kubernetes.io/projected/ad846df1-a795-4eb2-a063-e7f92b916f78-kube-api-access-4qk26\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009595 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-scripts\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.009647 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e102e1-771d-483a-bb67-06a66c885bb6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.010061 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad846df1-a795-4eb2-a063-e7f92b916f78-logs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.012787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-scripts\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.013154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-public-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.014358 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-internal-tls-certs\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.014543 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-config-data\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.015976 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad846df1-a795-4eb2-a063-e7f92b916f78-combined-ca-bundle\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.026271 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qk26\" (UniqueName: \"kubernetes.io/projected/ad846df1-a795-4eb2-a063-e7f92b916f78-kube-api-access-4qk26\") pod \"placement-6d948d4648-j8s68\" (UID: \"ad846df1-a795-4eb2-a063-e7f92b916f78\") " 
pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.197427 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.635774 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d948d4648-j8s68"] Dec 05 08:19:13 crc kubenswrapper[4780]: I1205 08:19:13.791045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d948d4648-j8s68" event={"ID":"ad846df1-a795-4eb2-a063-e7f92b916f78","Type":"ContainerStarted","Data":"e2bdf1ff19df86ccd8ee82d9e1b18ff2f0a52cb9449672074ba25196662b3195"} Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.209142 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.311828 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.312137 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="dnsmasq-dns" containerID="cri-o://bf4b59ac7e3470e0773e802e944c8190cd8f71d1fc2543bc1ed726b473ced0ec" gracePeriod=10 Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.821616 4780 generic.go:334] "Generic (PLEG): container finished" podID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerID="bf4b59ac7e3470e0773e802e944c8190cd8f71d1fc2543bc1ed726b473ced0ec" exitCode=0 Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.822107 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" event={"ID":"ae0b581c-b4ba-493a-a0b0-b309c0e18c46","Type":"ContainerDied","Data":"bf4b59ac7e3470e0773e802e944c8190cd8f71d1fc2543bc1ed726b473ced0ec"} Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.822135 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" event={"ID":"ae0b581c-b4ba-493a-a0b0-b309c0e18c46","Type":"ContainerDied","Data":"6d99bd4f9fda67b0a31e646d2d2db83672e221c98ef2e620bc1a95cafc26f27e"} Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.822145 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d99bd4f9fda67b0a31e646d2d2db83672e221c98ef2e620bc1a95cafc26f27e" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.831061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d948d4648-j8s68" event={"ID":"ad846df1-a795-4eb2-a063-e7f92b916f78","Type":"ContainerStarted","Data":"9b40a61162d04b60bdaa92490cbddb7ee2e92629d8f57a911637bb3e10b40000"} Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.831109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d948d4648-j8s68" event={"ID":"ad846df1-a795-4eb2-a063-e7f92b916f78","Type":"ContainerStarted","Data":"9c24f977ead329536b9abfe013688bbcf4675d53ddd90b7d6b3e2ad432263606"} Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.831124 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.831166 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.858060 4780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d948d4648-j8s68" podStartSLOduration=2.858035033 podStartE2EDuration="2.858035033s" podCreationTimestamp="2025-12-05 08:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:19:14.857250622 +0000 UTC m=+5588.926766964" watchObservedRunningTime="2025-12-05 08:19:14.858035033 +0000 UTC m=+5588.927551365" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.875515 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.944161 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb\") pod \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.944218 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config\") pod \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.944249 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx\") pod \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.944293 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb\") pod \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.944374 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc\") pod \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\" (UID: \"ae0b581c-b4ba-493a-a0b0-b309c0e18c46\") " Dec 05 08:19:14 crc kubenswrapper[4780]: I1205 08:19:14.952293 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx" (OuterVolumeSpecName: "kube-api-access-k75hx") pod "ae0b581c-b4ba-493a-a0b0-b309c0e18c46" (UID: "ae0b581c-b4ba-493a-a0b0-b309c0e18c46"). InnerVolumeSpecName "kube-api-access-k75hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.011563 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae0b581c-b4ba-493a-a0b0-b309c0e18c46" (UID: "ae0b581c-b4ba-493a-a0b0-b309c0e18c46"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.013488 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config" (OuterVolumeSpecName: "config") pod "ae0b581c-b4ba-493a-a0b0-b309c0e18c46" (UID: "ae0b581c-b4ba-493a-a0b0-b309c0e18c46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.014413 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae0b581c-b4ba-493a-a0b0-b309c0e18c46" (UID: "ae0b581c-b4ba-493a-a0b0-b309c0e18c46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.015541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae0b581c-b4ba-493a-a0b0-b309c0e18c46" (UID: "ae0b581c-b4ba-493a-a0b0-b309c0e18c46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.046563 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.046608 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.046621 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-kube-api-access-k75hx\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.046632 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.046641 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae0b581c-b4ba-493a-a0b0-b309c0e18c46-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.839278 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67984c8945-l4jmd" Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.872856 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:19:15 crc kubenswrapper[4780]: I1205 08:19:15.880360 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67984c8945-l4jmd"] Dec 05 08:19:16 crc kubenswrapper[4780]: I1205 08:19:16.150830 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" path="/var/lib/kubelet/pods/ae0b581c-b4ba-493a-a0b0-b309c0e18c46/volumes" Dec 05 08:19:44 crc kubenswrapper[4780]: I1205 08:19:44.330798 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:19:44 crc kubenswrapper[4780]: I1205 08:19:44.382592 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d948d4648-j8s68" Dec 05 08:20:05 crc kubenswrapper[4780]: E1205 08:20:05.452652 4780 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.177:58496->38.102.83.177:43955: read tcp 38.102.83.177:58496->38.102.83.177:43955: read: connection reset by peer Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.892201 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sdcrj"] Dec 05 08:20:07 crc kubenswrapper[4780]: E1205 08:20:07.892926 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="init" Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.892943 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="init" Dec 05 08:20:07 crc kubenswrapper[4780]: E1205 08:20:07.892956 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="dnsmasq-dns" Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.892964 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="dnsmasq-dns" Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.893189 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0b581c-b4ba-493a-a0b0-b309c0e18c46" containerName="dnsmasq-dns" Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.893986 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.904972 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sdcrj"] Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.993173 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lp4xp"] Dec 05 08:20:07 crc kubenswrapper[4780]: I1205 08:20:07.994768 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.005543 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-52dd-account-create-update-9lpgb"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.006971 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.009108 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.013554 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lp4xp"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.029367 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-52dd-account-create-update-9lpgb"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmqld\" (UniqueName: \"kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055676 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ppp\" (UniqueName: \"kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055707 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055767 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055842 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dk8p\" (UniqueName: \"kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.055866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.094614 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c8fz9"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.099078 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.122713 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c8fz9"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158059 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dk8p\" (UniqueName: \"kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqld\" (UniqueName: \"kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158210 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ppp\" (UniqueName: \"kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.158286 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.159049 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.159812 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.160557 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.201734 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dk8p\" (UniqueName: \"kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p\") pod \"nova-api-db-create-sdcrj\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") " pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.204570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ppp\" (UniqueName: \"kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp\") pod \"nova-api-52dd-account-create-update-9lpgb\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") " pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.232855 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-53f7-account-create-update-zgjlq"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.233968 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.234405 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sdcrj" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.235728 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqld\" (UniqueName: \"kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld\") pod \"nova-cell0-db-create-lp4xp\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") " pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.246162 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.262270 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.262705 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7knlz\" (UniqueName: \"kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.311948 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-53f7-account-create-update-zgjlq"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.313107 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp4xp" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.333547 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-52dd-account-create-update-9lpgb" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.368396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6x9\" (UniqueName: \"kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.368661 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.368822 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.368952 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7knlz\" (UniqueName: \"kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.370189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.396115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7knlz\" (UniqueName: \"kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz\") pod \"nova-cell1-db-create-c8fz9\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") " pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.417784 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c8fz9" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.471343 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6x9\" (UniqueName: \"kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.471438 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.472717 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.514644 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6x9\" (UniqueName: \"kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9\") pod \"nova-cell0-53f7-account-create-update-zgjlq\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") " pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.523024 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0f9a-account-create-update-8fbjh"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.524764 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.532242 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.533631 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0f9a-account-create-update-8fbjh"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.572207 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.572262 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8pr\" (UniqueName: \"kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.673800 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.673867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8pr\" (UniqueName: \"kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.674769 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.690252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8pr\" (UniqueName: \"kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr\") pod \"nova-cell1-0f9a-account-create-update-8fbjh\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") " pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.758187 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.877766 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.936269 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-52dd-account-create-update-9lpgb"] Dec 05 08:20:08 crc kubenswrapper[4780]: I1205 08:20:08.985395 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sdcrj"] Dec 05 08:20:08 crc kubenswrapper[4780]: W1205 08:20:08.991165 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1ea4e7f_8c3d_4bee_923f_0e22234099be.slice/crio-21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a WatchSource:0}: Error finding container 21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a: Status 404 returned error can't find the container with id 21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.056411 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c8fz9"] Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.068289 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lp4xp"] Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.246134 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-53f7-account-create-update-zgjlq"] Dec 05 08:20:09 crc kubenswrapper[4780]: W1205 08:20:09.261102 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e507e08_9c40_444f_8615_23285790d5fe.slice/crio-4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f WatchSource:0}: Error finding container 4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f: Status 404 returned error can't find the container with id 4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.382192 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0f9a-account-create-update-8fbjh"] Dec 05 08:20:09 crc kubenswrapper[4780]: W1205 08:20:09.383223 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78078629_9079_4b5a_91d2_0aed37d1e64a.slice/crio-97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955 WatchSource:0}: Error finding container 97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955: Status 404 returned error can't find the container with id 97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955 Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.439751 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" event={"ID":"1e507e08-9c40-444f-8615-23285790d5fe","Type":"ContainerStarted","Data":"4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.441350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c8fz9" event={"ID":"7b95e15f-3b23-42f5-b234-1adea6f07bbb","Type":"ContainerStarted","Data":"2c8f145e1bdffe705280b3ff93f7c255bcc1715f071f81571e86c745098864dd"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.441371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c8fz9" 
event={"ID":"7b95e15f-3b23-42f5-b234-1adea6f07bbb","Type":"ContainerStarted","Data":"eb2edc5db8eed9b1c0c32e0b84c7f02493ba5c8fe2e310164460e902cb4748c3"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.445591 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52dd-account-create-update-9lpgb" event={"ID":"2ca1dce6-700e-4805-918e-d62bec2c0fb0","Type":"ContainerStarted","Data":"267a9697c484d4c8e24de5de1c07bd015467210c1cdcf48e30167cf23d25fd1b"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.445621 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52dd-account-create-update-9lpgb" event={"ID":"2ca1dce6-700e-4805-918e-d62bec2c0fb0","Type":"ContainerStarted","Data":"f17dadfe6772d64f706b994276212a653a146032492dd6141cdace4e87080c88"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.450115 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" event={"ID":"78078629-9079-4b5a-91d2-0aed37d1e64a","Type":"ContainerStarted","Data":"97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.452474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp4xp" event={"ID":"374b863a-0346-414d-af63-bd8616e4df7e","Type":"ContainerStarted","Data":"3cc3b352cea40ae8f1b367624f0102c85c40787080be9d78d0d7053a34870666"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.452502 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp4xp" event={"ID":"374b863a-0346-414d-af63-bd8616e4df7e","Type":"ContainerStarted","Data":"8a07126722a166cc3e10315ba9e478e7dc4ac9e3684d8a32b813703e7bb952bb"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.465569 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-c8fz9" podStartSLOduration=1.465552761 podStartE2EDuration="1.465552761s" podCreationTimestamp="2025-12-05 08:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:09.459383883 +0000 UTC m=+5643.528900215" watchObservedRunningTime="2025-12-05 08:20:09.465552761 +0000 UTC m=+5643.535069083" Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.475454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sdcrj" event={"ID":"e1ea4e7f-8c3d-4bee-923f-0e22234099be","Type":"ContainerStarted","Data":"30071c05bd2b038754e38b35f67afd80f576a047b818222f2aa03ceb933c2a2a"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.475497 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sdcrj" event={"ID":"e1ea4e7f-8c3d-4bee-923f-0e22234099be","Type":"ContainerStarted","Data":"21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a"} Dec 05 08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.508675 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-52dd-account-create-update-9lpgb" podStartSLOduration=2.5086512130000003 podStartE2EDuration="2.508651213s" podCreationTimestamp="2025-12-05 08:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:09.48827506 +0000 UTC m=+5643.557791392" watchObservedRunningTime="2025-12-05 08:20:09.508651213 +0000 UTC m=+5643.578167545" Dec 05 
08:20:09 crc kubenswrapper[4780]: I1205 08:20:09.518410 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lp4xp" podStartSLOduration=2.518388538 podStartE2EDuration="2.518388538s" podCreationTimestamp="2025-12-05 08:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:09.512824867 +0000 UTC m=+5643.582341199" watchObservedRunningTime="2025-12-05 08:20:09.518388538 +0000 UTC m=+5643.587904870"
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.490069 4780 generic.go:334] "Generic (PLEG): container finished" podID="1e507e08-9c40-444f-8615-23285790d5fe" containerID="a17cbb8ee2f84b3a5348e800730063f6b423a8d0dff60ff67b542891a475c429" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.490171 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" event={"ID":"1e507e08-9c40-444f-8615-23285790d5fe","Type":"ContainerDied","Data":"a17cbb8ee2f84b3a5348e800730063f6b423a8d0dff60ff67b542891a475c429"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.492818 4780 generic.go:334] "Generic (PLEG): container finished" podID="7b95e15f-3b23-42f5-b234-1adea6f07bbb" containerID="2c8f145e1bdffe705280b3ff93f7c255bcc1715f071f81571e86c745098864dd" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.493177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c8fz9" event={"ID":"7b95e15f-3b23-42f5-b234-1adea6f07bbb","Type":"ContainerDied","Data":"2c8f145e1bdffe705280b3ff93f7c255bcc1715f071f81571e86c745098864dd"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.496624 4780 generic.go:334] "Generic (PLEG): container finished" podID="2ca1dce6-700e-4805-918e-d62bec2c0fb0" containerID="267a9697c484d4c8e24de5de1c07bd015467210c1cdcf48e30167cf23d25fd1b" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.496723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52dd-account-create-update-9lpgb" event={"ID":"2ca1dce6-700e-4805-918e-d62bec2c0fb0","Type":"ContainerDied","Data":"267a9697c484d4c8e24de5de1c07bd015467210c1cdcf48e30167cf23d25fd1b"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.498823 4780 generic.go:334] "Generic (PLEG): container finished" podID="78078629-9079-4b5a-91d2-0aed37d1e64a" containerID="fd2eabb2df3758980a775ca03436a67e44e417c936f138929ccc4d1421a56bd6" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.498859 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" event={"ID":"78078629-9079-4b5a-91d2-0aed37d1e64a","Type":"ContainerDied","Data":"fd2eabb2df3758980a775ca03436a67e44e417c936f138929ccc4d1421a56bd6"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.501061 4780 generic.go:334] "Generic (PLEG): container finished" podID="374b863a-0346-414d-af63-bd8616e4df7e" containerID="3cc3b352cea40ae8f1b367624f0102c85c40787080be9d78d0d7053a34870666" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.501131 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp4xp" event={"ID":"374b863a-0346-414d-af63-bd8616e4df7e","Type":"ContainerDied","Data":"3cc3b352cea40ae8f1b367624f0102c85c40787080be9d78d0d7053a34870666"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.503014 4780 generic.go:334] "Generic (PLEG): container finished" podID="e1ea4e7f-8c3d-4bee-923f-0e22234099be" containerID="30071c05bd2b038754e38b35f67afd80f576a047b818222f2aa03ceb933c2a2a" exitCode=0
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.503071 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sdcrj" event={"ID":"e1ea4e7f-8c3d-4bee-923f-0e22234099be","Type":"ContainerDied","Data":"30071c05bd2b038754e38b35f67afd80f576a047b818222f2aa03ceb933c2a2a"}
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.875522 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sdcrj"
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.932702 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts\") pod \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") "
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.932822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dk8p\" (UniqueName: \"kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p\") pod \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\" (UID: \"e1ea4e7f-8c3d-4bee-923f-0e22234099be\") "
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.933562 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1ea4e7f-8c3d-4bee-923f-0e22234099be" (UID: "e1ea4e7f-8c3d-4bee-923f-0e22234099be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:10 crc kubenswrapper[4780]: I1205 08:20:10.938589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p" (OuterVolumeSpecName: "kube-api-access-2dk8p") pod "e1ea4e7f-8c3d-4bee-923f-0e22234099be" (UID: "e1ea4e7f-8c3d-4bee-923f-0e22234099be"). InnerVolumeSpecName "kube-api-access-2dk8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:11 crc kubenswrapper[4780]: I1205 08:20:11.034191 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dk8p\" (UniqueName: \"kubernetes.io/projected/e1ea4e7f-8c3d-4bee-923f-0e22234099be-kube-api-access-2dk8p\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:11 crc kubenswrapper[4780]: I1205 08:20:11.034463 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1ea4e7f-8c3d-4bee-923f-0e22234099be-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:11 crc kubenswrapper[4780]: I1205 08:20:11.517505 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sdcrj"
Dec 05 08:20:11 crc kubenswrapper[4780]: I1205 08:20:11.535042 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sdcrj" event={"ID":"e1ea4e7f-8c3d-4bee-923f-0e22234099be","Type":"ContainerDied","Data":"21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a"}
Dec 05 08:20:11 crc kubenswrapper[4780]: I1205 08:20:11.535086 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c1cce783e2d3e875902bcc6b7d23266801cadb024ac2d383c025331d3eae3a"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.058145 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.164385 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.165020 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts\") pod \"1e507e08-9c40-444f-8615-23285790d5fe\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.165192 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6x9\" (UniqueName: \"kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9\") pod \"1e507e08-9c40-444f-8615-23285790d5fe\" (UID: \"1e507e08-9c40-444f-8615-23285790d5fe\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.166030 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e507e08-9c40-444f-8615-23285790d5fe" (UID: "1e507e08-9c40-444f-8615-23285790d5fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.166259 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e507e08-9c40-444f-8615-23285790d5fe-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.171060 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp4xp"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.173099 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9" (OuterVolumeSpecName: "kube-api-access-2m6x9") pod "1e507e08-9c40-444f-8615-23285790d5fe" (UID: "1e507e08-9c40-444f-8615-23285790d5fe"). InnerVolumeSpecName "kube-api-access-2m6x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.223867 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-52dd-account-create-update-9lpgb"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.231162 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c8fz9"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.267466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg8pr\" (UniqueName: \"kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr\") pod \"78078629-9079-4b5a-91d2-0aed37d1e64a\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.267647 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts\") pod \"78078629-9079-4b5a-91d2-0aed37d1e64a\" (UID: \"78078629-9079-4b5a-91d2-0aed37d1e64a\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.268456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78078629-9079-4b5a-91d2-0aed37d1e64a" (UID: "78078629-9079-4b5a-91d2-0aed37d1e64a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.269906 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78078629-9079-4b5a-91d2-0aed37d1e64a-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.270091 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6x9\" (UniqueName: \"kubernetes.io/projected/1e507e08-9c40-444f-8615-23285790d5fe-kube-api-access-2m6x9\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.270660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr" (OuterVolumeSpecName: "kube-api-access-gg8pr") pod "78078629-9079-4b5a-91d2-0aed37d1e64a" (UID: "78078629-9079-4b5a-91d2-0aed37d1e64a"). InnerVolumeSpecName "kube-api-access-gg8pr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371176 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts\") pod \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371240 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts\") pod \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371405 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts\") pod \"374b863a-0346-414d-af63-bd8616e4df7e\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371499 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9ppp\" (UniqueName: \"kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp\") pod \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\" (UID: \"2ca1dce6-700e-4805-918e-d62bec2c0fb0\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371554 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7knlz\" (UniqueName: \"kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz\") pod \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\" (UID: \"7b95e15f-3b23-42f5-b234-1adea6f07bbb\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.371649 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmqld\" (UniqueName: \"kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld\") pod \"374b863a-0346-414d-af63-bd8616e4df7e\" (UID: \"374b863a-0346-414d-af63-bd8616e4df7e\") "
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.372213 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg8pr\" (UniqueName: \"kubernetes.io/projected/78078629-9079-4b5a-91d2-0aed37d1e64a-kube-api-access-gg8pr\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.372257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b95e15f-3b23-42f5-b234-1adea6f07bbb" (UID: "7b95e15f-3b23-42f5-b234-1adea6f07bbb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.372304 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "374b863a-0346-414d-af63-bd8616e4df7e" (UID: "374b863a-0346-414d-af63-bd8616e4df7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.372413 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ca1dce6-700e-4805-918e-d62bec2c0fb0" (UID: "2ca1dce6-700e-4805-918e-d62bec2c0fb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.375727 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld" (OuterVolumeSpecName: "kube-api-access-dmqld") pod "374b863a-0346-414d-af63-bd8616e4df7e" (UID: "374b863a-0346-414d-af63-bd8616e4df7e"). InnerVolumeSpecName "kube-api-access-dmqld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.375904 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp" (OuterVolumeSpecName: "kube-api-access-b9ppp") pod "2ca1dce6-700e-4805-918e-d62bec2c0fb0" (UID: "2ca1dce6-700e-4805-918e-d62bec2c0fb0"). InnerVolumeSpecName "kube-api-access-b9ppp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.375952 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz" (OuterVolumeSpecName: "kube-api-access-7knlz") pod "7b95e15f-3b23-42f5-b234-1adea6f07bbb" (UID: "7b95e15f-3b23-42f5-b234-1adea6f07bbb"). InnerVolumeSpecName "kube-api-access-7knlz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475317 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/374b863a-0346-414d-af63-bd8616e4df7e-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475395 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9ppp\" (UniqueName: \"kubernetes.io/projected/2ca1dce6-700e-4805-918e-d62bec2c0fb0-kube-api-access-b9ppp\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475418 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7knlz\" (UniqueName: \"kubernetes.io/projected/7b95e15f-3b23-42f5-b234-1adea6f07bbb-kube-api-access-7knlz\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475433 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmqld\" (UniqueName: \"kubernetes.io/projected/374b863a-0346-414d-af63-bd8616e4df7e-kube-api-access-dmqld\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475448 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b95e15f-3b23-42f5-b234-1adea6f07bbb-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.475462 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ca1dce6-700e-4805-918e-d62bec2c0fb0-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.528549 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.528550 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-53f7-account-create-update-zgjlq" event={"ID":"1e507e08-9c40-444f-8615-23285790d5fe","Type":"ContainerDied","Data":"4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f"}
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.528591 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4807d8ae7bd3e4c1fda3a2414c53a4af898b0aa7cff73994fe96e500f01dde3f"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.530081 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c8fz9" event={"ID":"7b95e15f-3b23-42f5-b234-1adea6f07bbb","Type":"ContainerDied","Data":"eb2edc5db8eed9b1c0c32e0b84c7f02493ba5c8fe2e310164460e902cb4748c3"}
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.530120 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2edc5db8eed9b1c0c32e0b84c7f02493ba5c8fe2e310164460e902cb4748c3"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.530138 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c8fz9"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.531112 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-52dd-account-create-update-9lpgb" event={"ID":"2ca1dce6-700e-4805-918e-d62bec2c0fb0","Type":"ContainerDied","Data":"f17dadfe6772d64f706b994276212a653a146032492dd6141cdace4e87080c88"}
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.531243 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17dadfe6772d64f706b994276212a653a146032492dd6141cdace4e87080c88"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.531158 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-52dd-account-create-update-9lpgb"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.535592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh" event={"ID":"78078629-9079-4b5a-91d2-0aed37d1e64a","Type":"ContainerDied","Data":"97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955"}
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.535635 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97857aa26cb5af99b182ae739aaa9132525bb61d73d870d293ef75377e0ef955"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.535693 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0f9a-account-create-update-8fbjh"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.540900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp4xp" event={"ID":"374b863a-0346-414d-af63-bd8616e4df7e","Type":"ContainerDied","Data":"8a07126722a166cc3e10315ba9e478e7dc4ac9e3684d8a32b813703e7bb952bb"}
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.540972 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a07126722a166cc3e10315ba9e478e7dc4ac9e3684d8a32b813703e7bb952bb"
Dec 05 08:20:12 crc kubenswrapper[4780]: I1205 08:20:12.541084 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp4xp"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.204227 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-42ghh"]
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205242 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374b863a-0346-414d-af63-bd8616e4df7e" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205255 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="374b863a-0346-414d-af63-bd8616e4df7e" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205274 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78078629-9079-4b5a-91d2-0aed37d1e64a" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205282 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="78078629-9079-4b5a-91d2-0aed37d1e64a" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205292 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1dce6-700e-4805-918e-d62bec2c0fb0" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205299 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1dce6-700e-4805-918e-d62bec2c0fb0" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205331 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ea4e7f-8c3d-4bee-923f-0e22234099be" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205338 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ea4e7f-8c3d-4bee-923f-0e22234099be" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205348 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b95e15f-3b23-42f5-b234-1adea6f07bbb" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205354 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b95e15f-3b23-42f5-b234-1adea6f07bbb" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: E1205 08:20:18.205366 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e507e08-9c40-444f-8615-23285790d5fe" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205372 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e507e08-9c40-444f-8615-23285790d5fe" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205548 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="78078629-9079-4b5a-91d2-0aed37d1e64a" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205558 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ea4e7f-8c3d-4bee-923f-0e22234099be" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205567 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca1dce6-700e-4805-918e-d62bec2c0fb0" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205582 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e507e08-9c40-444f-8615-23285790d5fe" containerName="mariadb-account-create-update"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205594 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="374b863a-0346-414d-af63-bd8616e4df7e" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.205613 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b95e15f-3b23-42f5-b234-1adea6f07bbb" containerName="mariadb-database-create"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.206333 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.210338 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.210580 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.211300 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n2lp8"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.217754 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-42ghh"]
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.317347 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.317458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.317618 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.317831 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.419838 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.419982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.420028 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.420114 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.428116 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.428431 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.428915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.447238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs\") pod \"nova-cell0-conductor-db-sync-42ghh\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") " pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.529076 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:18 crc kubenswrapper[4780]: I1205 08:20:18.986809 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-42ghh"]
Dec 05 08:20:19 crc kubenswrapper[4780]: I1205 08:20:19.604235 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-42ghh" event={"ID":"d7138ace-d969-4040-aa34-a8f46b7a192f","Type":"ContainerStarted","Data":"2866ea90ebc736bf8309a42f3736547929a3d0b1cc8b4eabc043fc7040a486f3"}
Dec 05 08:20:28 crc kubenswrapper[4780]: I1205 08:20:28.734208 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-42ghh" event={"ID":"d7138ace-d969-4040-aa34-a8f46b7a192f","Type":"ContainerStarted","Data":"acd6e5223bbe7a7a95027b04af00dcb157ee4135568306dd58cab1e54b15a3ac"}
Dec 05 08:20:28 crc kubenswrapper[4780]: I1205 08:20:28.754703 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-42ghh" podStartSLOduration=2.244829785 podStartE2EDuration="10.754685964s" podCreationTimestamp="2025-12-05 08:20:18 +0000 UTC" firstStartedPulling="2025-12-05 08:20:18.982506651 +0000 UTC m=+5653.052022983" lastFinishedPulling="2025-12-05 08:20:27.49236283 +0000 UTC m=+5661.561879162" observedRunningTime="2025-12-05 08:20:28.749292467 +0000 UTC m=+5662.818808799" watchObservedRunningTime="2025-12-05 08:20:28.754685964 +0000 UTC m=+5662.824202286"
Dec 05 08:20:33 crc kubenswrapper[4780]: I1205 08:20:33.778697 4780 generic.go:334] "Generic (PLEG): container finished" podID="d7138ace-d969-4040-aa34-a8f46b7a192f" containerID="acd6e5223bbe7a7a95027b04af00dcb157ee4135568306dd58cab1e54b15a3ac" exitCode=0
Dec 05 08:20:33 crc kubenswrapper[4780]: I1205 08:20:33.778798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-42ghh" event={"ID":"d7138ace-d969-4040-aa34-a8f46b7a192f","Type":"ContainerDied","Data":"acd6e5223bbe7a7a95027b04af00dcb157ee4135568306dd58cab1e54b15a3ac"}
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.189557 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.330737 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs\") pod \"d7138ace-d969-4040-aa34-a8f46b7a192f\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") "
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.331095 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data\") pod \"d7138ace-d969-4040-aa34-a8f46b7a192f\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") "
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.331230 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts\") pod \"d7138ace-d969-4040-aa34-a8f46b7a192f\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") "
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.331416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle\") pod \"d7138ace-d969-4040-aa34-a8f46b7a192f\" (UID: \"d7138ace-d969-4040-aa34-a8f46b7a192f\") "
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.336772 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts" (OuterVolumeSpecName: "scripts") pod "d7138ace-d969-4040-aa34-a8f46b7a192f" (UID: "d7138ace-d969-4040-aa34-a8f46b7a192f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.342117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs" (OuterVolumeSpecName: "kube-api-access-t2wfs") pod "d7138ace-d969-4040-aa34-a8f46b7a192f" (UID: "d7138ace-d969-4040-aa34-a8f46b7a192f"). InnerVolumeSpecName "kube-api-access-t2wfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.356624 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data" (OuterVolumeSpecName: "config-data") pod "d7138ace-d969-4040-aa34-a8f46b7a192f" (UID: "d7138ace-d969-4040-aa34-a8f46b7a192f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.356992 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7138ace-d969-4040-aa34-a8f46b7a192f" (UID: "d7138ace-d969-4040-aa34-a8f46b7a192f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.432636 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.432664 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.432677 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2wfs\" (UniqueName: \"kubernetes.io/projected/d7138ace-d969-4040-aa34-a8f46b7a192f-kube-api-access-t2wfs\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.432688 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7138ace-d969-4040-aa34-a8f46b7a192f-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.809231 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-42ghh" event={"ID":"d7138ace-d969-4040-aa34-a8f46b7a192f","Type":"ContainerDied","Data":"2866ea90ebc736bf8309a42f3736547929a3d0b1cc8b4eabc043fc7040a486f3"}
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.809496 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2866ea90ebc736bf8309a42f3736547929a3d0b1cc8b4eabc043fc7040a486f3"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.809340 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-42ghh"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.868812 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 05 08:20:35 crc kubenswrapper[4780]: E1205 08:20:35.872259 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7138ace-d969-4040-aa34-a8f46b7a192f" containerName="nova-cell0-conductor-db-sync"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.872348 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7138ace-d969-4040-aa34-a8f46b7a192f" containerName="nova-cell0-conductor-db-sync"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.872870 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7138ace-d969-4040-aa34-a8f46b7a192f" containerName="nova-cell0-conductor-db-sync"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.873845 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.876437 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.876462 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n2lp8"
Dec 05 08:20:35 crc kubenswrapper[4780]: I1205 08:20:35.887765 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.041826 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.042209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.042343 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjps\" (UniqueName: \"kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.143895 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.143938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.143973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjps\" (UniqueName: \"kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.149636 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.149740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.168923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjps\" (UniqueName: \"kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps\") pod \"nova-cell0-conductor-0\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.194692 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.629552 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.818661 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45254a92-70be-48cc-950a-683efef559d5","Type":"ContainerStarted","Data":"5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9"}
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.819028 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.819042 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45254a92-70be-48cc-950a-683efef559d5","Type":"ContainerStarted","Data":"ba5484d669738ec533fd76f47335ca594c025072a64bdc17312a162cfc16293e"}
Dec 05 08:20:36 crc kubenswrapper[4780]: I1205 08:20:36.846321 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.846299098 podStartE2EDuration="1.846299098s" podCreationTimestamp="2025-12-05 08:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:36.838547997 +0000 UTC m=+5670.908064319" watchObservedRunningTime="2025-12-05 08:20:36.846299098 +0000 UTC m=+5670.915815430"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.235221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.809064 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-r6fqt"]
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.810628 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.822462 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.822649 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.837729 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r6fqt"]
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.862226 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.862627 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.862712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.862731 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5n2\" (UniqueName: \"kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.965073 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.965134 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5n2\" (UniqueName: \"kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.965201 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.965259 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.979831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.985343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:41 crc kubenswrapper[4780]: I1205 08:20:41.994438 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.009606 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.011456 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.024300 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.030688 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5n2\" (UniqueName: \"kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2\") pod \"nova-cell0-cell-mapping-r6fqt\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.065424 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.085096 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.085171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.085192 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmldm\" (UniqueName: \"kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.085214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.181479 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r6fqt"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.186765 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmldm\" (UniqueName: \"kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.186820 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.187007 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.187091 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.189723 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.190546 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.192476 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.193666 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.233795 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.236245 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.268939 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.269092 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmldm\" (UniqueName: \"kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm\") pod \"nova-api-0\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") " pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.294595 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.296559 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.302085 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.312943 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.314140 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.318015 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.340587 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.357837 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.393808 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.393867 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.394040 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.394065 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrj4x\" (UniqueName: \"kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.401436 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.403132 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.437392 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.439979 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"]
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495695 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495743 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrj4x\" (UniqueName: \"kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495805 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgpb5\" (UniqueName: \"kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495854 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495893 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495931 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p54hs\" (UniqueName: \"kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495965 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.495990 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.496021 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.496047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.498306 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.502252 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.505254 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.517318 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrj4x\" (UniqueName: \"kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x\") pod \"nova-metadata-0\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.597909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.597955 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.597986 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598009 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598061 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598087 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrq4q\" (UniqueName: \"kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598178 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598205 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgpb5\" (UniqueName: \"kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.598267 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p54hs\" (UniqueName: \"kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.602103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.602290 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.606631 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.607115 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.618685 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p54hs\" (UniqueName: \"kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.621550 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgpb5\" (UniqueName: \"kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5\") pod \"nova-scheduler-0\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " pod="openstack/nova-scheduler-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.633264 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.665190 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.675532 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.702306 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.702515 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.702545 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrq4q\" (UniqueName: \"kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.702635 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.702669 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.703709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.703738 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.704432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.704714 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.732893 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrq4q\" (UniqueName: \"kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q\") pod \"dnsmasq-dns-54c966dbdf-p4mwh\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.733559 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.852667 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r6fqt"] Dec 05 08:20:42 crc kubenswrapper[4780]: I1205 08:20:42.938763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r6fqt" event={"ID":"12be9160-11f9-4c9a-a72f-dc6f26efa7eb","Type":"ContainerStarted","Data":"7b9d4b5249ef4607f321ed25ec8d0af26aa78fc710eae6c642a4e9a29712d28f"} Dec 05 08:20:43 crc kubenswrapper[4780]: W1205 08:20:43.009133 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff01c3bd_eeb7_4336_8c20_b1bbc532ad62.slice/crio-7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8 WatchSource:0}: Error finding container 7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8: Status 404 returned error can't find the container with id 7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8 Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.026759 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c5525"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.038906 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.042072 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.042275 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.048766 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c5525"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.063529 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.077929 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.218552 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.219154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.219186 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 
08:20:43.219250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hp6\" (UniqueName: \"kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.322158 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.322217 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.322274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8hp6\" (UniqueName: \"kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.322370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.328725 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.335665 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.336374 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.338840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8hp6\" (UniqueName: \"kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6\") pod \"nova-cell1-conductor-db-sync-c5525\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 
05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.387317 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.397971 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:20:43 crc kubenswrapper[4780]: W1205 08:20:43.420697 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e3410ac_49dd_4132_85d3_0dcea6218053.slice/crio-42aeda67e2957cbbaad2df40979736b7aff9bf3ae12c73081faa6d5108fba47e WatchSource:0}: Error finding container 42aeda67e2957cbbaad2df40979736b7aff9bf3ae12c73081faa6d5108fba47e: Status 404 returned error can't find the container with id 42aeda67e2957cbbaad2df40979736b7aff9bf3ae12c73081faa6d5108fba47e Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.427060 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.510707 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.958114 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r6fqt" event={"ID":"12be9160-11f9-4c9a-a72f-dc6f26efa7eb","Type":"ContainerStarted","Data":"5ca44b429f5ced20a845c94e49194585c0844cf50839e9153ed0cb089c8f4e6c"} Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.963649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerStarted","Data":"7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8"} Dec 05 08:20:43 crc kubenswrapper[4780]: W1205 08:20:43.993679 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d3b1df7_f393_4984_9b7e_a1ceedf8f8da.slice/crio-460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16 WatchSource:0}: Error finding container 460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16: Status 404 returned error can't find the container with id 460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16 Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.994231 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c5525"] Dec 05 08:20:43 crc kubenswrapper[4780]: I1205 08:20:43.995166 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e3410ac-49dd-4132-85d3-0dcea6218053","Type":"ContainerStarted","Data":"42aeda67e2957cbbaad2df40979736b7aff9bf3ae12c73081faa6d5108fba47e"} Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.008555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerStarted","Data":"d14dc8c6785f6cc4031929ba9b7ad23b87b13474c4e8228ddca93cfc395d3ef2"} Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.012385 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-r6fqt" podStartSLOduration=3.012358197 podStartE2EDuration="3.012358197s" podCreationTimestamp="2025-12-05 08:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 08:20:43.976217544 +0000 UTC m=+5678.045733896" watchObservedRunningTime="2025-12-05 08:20:44.012358197 +0000 UTC m=+5678.081874549" Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.028028 4780 generic.go:334] "Generic (PLEG): container finished" podID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerID="1ea666c5e9a65ee5ae5ab44321ef84bb0f5e06b19c041a49b6a44c86f8193dfd" exitCode=0 Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.029399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" event={"ID":"cf57fe22-8606-4ec1-a301-1a33d1d3cb22","Type":"ContainerDied","Data":"1ea666c5e9a65ee5ae5ab44321ef84bb0f5e06b19c041a49b6a44c86f8193dfd"} Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.029453 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" event={"ID":"cf57fe22-8606-4ec1-a301-1a33d1d3cb22","Type":"ContainerStarted","Data":"58b990357ab15c28f9ce53ddbe0a4dea6d54bce6df3c6ded919662ccce839cba"} Dec 05 08:20:44 crc kubenswrapper[4780]: I1205 08:20:44.037762 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f77e8cf8-05f2-440b-83c2-12916307d33e","Type":"ContainerStarted","Data":"99b3da8e3e182a77246807dfd7eeba7f4512577d664d5c2f03f3c567b6126bbc"} Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.060220 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" event={"ID":"cf57fe22-8606-4ec1-a301-1a33d1d3cb22","Type":"ContainerStarted","Data":"d930fc3f58e1d4756fd2e44548f493a5f83e01dfac69ed8690360afdf73d00f0"} Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.060910 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.062575 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c5525" event={"ID":"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da","Type":"ContainerStarted","Data":"d8c1e52d05283ae6440a324bf8625d45fe33de7553f218eeda3e5ff1bdddb060"} Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.062627 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c5525" event={"ID":"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da","Type":"ContainerStarted","Data":"460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16"} Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.104720 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" podStartSLOduration=3.104694757 podStartE2EDuration="3.104694757s" podCreationTimestamp="2025-12-05 08:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:45.093607856 +0000 UTC m=+5679.163124198" watchObservedRunningTime="2025-12-05 08:20:45.104694757 +0000 UTC m=+5679.174211089" Dec 05 08:20:45 crc kubenswrapper[4780]: I1205 08:20:45.119972 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c5525" podStartSLOduration=3.119945882 podStartE2EDuration="3.119945882s" podCreationTimestamp="2025-12-05 08:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:45.1165231 +0000 UTC m=+5679.186039442" 
watchObservedRunningTime="2025-12-05 08:20:45.119945882 +0000 UTC m=+5679.189462214" Dec 05 08:20:46 crc kubenswrapper[4780]: I1205 08:20:46.558265 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:46 crc kubenswrapper[4780]: I1205 08:20:46.579752 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.089560 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f77e8cf8-05f2-440b-83c2-12916307d33e","Type":"ContainerStarted","Data":"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.092368 4780 generic.go:334] "Generic (PLEG): container finished" podID="1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" containerID="d8c1e52d05283ae6440a324bf8625d45fe33de7553f218eeda3e5ff1bdddb060" exitCode=0 Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.092466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c5525" event={"ID":"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da","Type":"ContainerDied","Data":"d8c1e52d05283ae6440a324bf8625d45fe33de7553f218eeda3e5ff1bdddb060"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.095389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerStarted","Data":"8ab0522b7cd25c3bb24eee261ef85f583e3baf024d17c0a7153bd90683b2545f"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.095475 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerStarted","Data":"6e0f71ff785779a3996787755a29f3a9cf3d0c21cb0080744469c298e2e3fb05"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.095506 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-log" containerID="cri-o://6e0f71ff785779a3996787755a29f3a9cf3d0c21cb0080744469c298e2e3fb05" gracePeriod=30 Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.095604 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-metadata" containerID="cri-o://8ab0522b7cd25c3bb24eee261ef85f583e3baf024d17c0a7153bd90683b2545f" gracePeriod=30 Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.106354 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e3410ac-49dd-4132-85d3-0dcea6218053","Type":"ContainerStarted","Data":"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.106469 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6e3410ac-49dd-4132-85d3-0dcea6218053" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc" gracePeriod=30 Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.115042 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerStarted","Data":"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.115187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerStarted","Data":"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765"} Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.121996 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.720115726 podStartE2EDuration="6.121978114s" podCreationTimestamp="2025-12-05 08:20:42 +0000 UTC" firstStartedPulling="2025-12-05 08:20:43.419048739 +0000 UTC m=+5677.488565071" lastFinishedPulling="2025-12-05 08:20:46.820911117 +0000 UTC m=+5680.890427459" observedRunningTime="2025-12-05 08:20:48.114565543 +0000 UTC m=+5682.184081885" watchObservedRunningTime="2025-12-05 08:20:48.121978114 +0000 UTC m=+5682.191494436" Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.197657 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8008581230000003 podStartE2EDuration="6.197627862s" podCreationTimestamp="2025-12-05 08:20:42 +0000 UTC" firstStartedPulling="2025-12-05 08:20:43.426862552 +0000 UTC m=+5677.496378884" lastFinishedPulling="2025-12-05 08:20:46.823632291 +0000 UTC m=+5680.893148623" observedRunningTime="2025-12-05 08:20:48.170946126 +0000 UTC m=+5682.240462458" watchObservedRunningTime="2025-12-05 08:20:48.197627862 +0000 UTC m=+5682.267144194" Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.213513 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.454624517 podStartE2EDuration="7.213489894s" podCreationTimestamp="2025-12-05 08:20:41 +0000 UTC" firstStartedPulling="2025-12-05 08:20:43.066584713 +0000 UTC m=+5677.136101045" lastFinishedPulling="2025-12-05 08:20:46.82545009 +0000 UTC m=+5680.894966422" observedRunningTime="2025-12-05 08:20:48.209281599 +0000 UTC m=+5682.278797931" watchObservedRunningTime="2025-12-05 08:20:48.213489894 +0000 UTC m=+5682.283006226" Dec 05 08:20:48 crc kubenswrapper[4780]: I1205 08:20:48.215709 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.408728358 podStartE2EDuration="6.215703233s" podCreationTimestamp="2025-12-05 08:20:42 +0000 UTC" firstStartedPulling="2025-12-05 08:20:43.014726123 +0000 UTC m=+5677.084242465" lastFinishedPulling="2025-12-05 08:20:46.821700998 +0000 UTC m=+5680.891217340" observedRunningTime="2025-12-05 08:20:48.191996948 +0000 UTC m=+5682.261513300" watchObservedRunningTime="2025-12-05 08:20:48.215703233 +0000 UTC m=+5682.285219565" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.124522 4780 generic.go:334] "Generic (PLEG): container finished" podID="12be9160-11f9-4c9a-a72f-dc6f26efa7eb" containerID="5ca44b429f5ced20a845c94e49194585c0844cf50839e9153ed0cb089c8f4e6c" exitCode=0 Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.124626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r6fqt" event={"ID":"12be9160-11f9-4c9a-a72f-dc6f26efa7eb","Type":"ContainerDied","Data":"5ca44b429f5ced20a845c94e49194585c0844cf50839e9153ed0cb089c8f4e6c"} Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126659 4780 
generic.go:334] "Generic (PLEG): container finished" podID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerID="8ab0522b7cd25c3bb24eee261ef85f583e3baf024d17c0a7153bd90683b2545f" exitCode=0 Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126676 4780 generic.go:334] "Generic (PLEG): container finished" podID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerID="6e0f71ff785779a3996787755a29f3a9cf3d0c21cb0080744469c298e2e3fb05" exitCode=143 Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126699 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerDied","Data":"8ab0522b7cd25c3bb24eee261ef85f583e3baf024d17c0a7153bd90683b2545f"} Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126746 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerDied","Data":"6e0f71ff785779a3996787755a29f3a9cf3d0c21cb0080744469c298e2e3fb05"} Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62","Type":"ContainerDied","Data":"7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8"} Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.126765 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7710448f658556c0698acceeb86e6cfcb5f03aa574476167fd3205d3cf5bf1a8" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.202553 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.370959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle\") pod \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.371162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs\") pod \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.371246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrj4x\" (UniqueName: \"kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x\") pod \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.371369 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data\") pod \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\" (UID: \"ff01c3bd-eeb7-4336-8c20-b1bbc532ad62\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.371655 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs" (OuterVolumeSpecName: "logs") pod "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" (UID: "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.372306 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.377041 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x" (OuterVolumeSpecName: "kube-api-access-zrj4x") pod "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" (UID: "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62"). InnerVolumeSpecName "kube-api-access-zrj4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.408494 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" (UID: "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.410665 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data" (OuterVolumeSpecName: "config-data") pod "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" (UID: "ff01c3bd-eeb7-4336-8c20-b1bbc532ad62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.476506 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrj4x\" (UniqueName: \"kubernetes.io/projected/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-kube-api-access-zrj4x\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.476573 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.476586 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.482041 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.582961 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data\") pod \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.583056 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle\") pod \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.583099 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts\") pod \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.583141 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8hp6\" (UniqueName: \"kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6\") pod \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\" (UID: \"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da\") " Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.586669 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6" (OuterVolumeSpecName: "kube-api-access-z8hp6") pod "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" (UID: "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da"). InnerVolumeSpecName "kube-api-access-z8hp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.595402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts" (OuterVolumeSpecName: "scripts") pod "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" (UID: "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.609064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" (UID: "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.613541 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data" (OuterVolumeSpecName: "config-data") pod "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" (UID: "1d3b1df7-f393-4984-9b7e-a1ceedf8f8da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.685862 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.685915 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8hp6\" (UniqueName: \"kubernetes.io/projected/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-kube-api-access-z8hp6\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.685928 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:49 crc kubenswrapper[4780]: I1205 08:20:49.685937 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.138659 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c5525" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.138656 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.154645 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c5525" event={"ID":"1d3b1df7-f393-4984-9b7e-a1ceedf8f8da","Type":"ContainerDied","Data":"460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16"} Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.154700 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460708ceffd20feecc9058745d95f7d325946d29885c7ad8789d6ee1cdac3d16" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.211080 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.222197 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.241295 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: E1205 08:20:50.241801 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-log" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.241817 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-log" Dec 05 08:20:50 crc kubenswrapper[4780]: E1205 08:20:50.241860 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-metadata" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.241869 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-metadata" Dec 05 08:20:50 crc kubenswrapper[4780]: E1205 08:20:50.241914 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" containerName="nova-cell1-conductor-db-sync" Dec 05 08:20:50 crc kubenswrapper[4780]: 
I1205 08:20:50.241923 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" containerName="nova-cell1-conductor-db-sync" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.242128 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-log" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.242144 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" containerName="nova-cell1-conductor-db-sync" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.242169 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" containerName="nova-metadata-metadata" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.242988 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.251167 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.261591 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.264111 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.269082 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.269861 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.300168 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.311034 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btwp\" (UniqueName: \"kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data\") pod \"nova-cell1-conductor-0\" 
(UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400305 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400344 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400401 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh88p\" (UniqueName: \"kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.400446 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.501742 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh88p\" (UniqueName: \"kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.502413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.502581 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8btwp\" (UniqueName: \"kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.502672 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.502748 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc 
kubenswrapper[4780]: I1205 08:20:50.502824 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.502916 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.503014 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.503250 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.509122 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.509291 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.509713 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.510389 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.511245 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.518654 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh88p\" (UniqueName: \"kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p\") pod \"nova-cell1-conductor-0\" (UID: 
\"506f828e-216f-456c-91c9-ee53f5b4056e\") " pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.522211 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8btwp\" (UniqueName: \"kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp\") pod \"nova-metadata-0\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") " pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.584765 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r6fqt" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.584774 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.599320 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.707377 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data\") pod \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.707473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5n2\" (UniqueName: \"kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2\") pod \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.707532 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle\") pod \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.707742 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts\") pod \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\" (UID: \"12be9160-11f9-4c9a-a72f-dc6f26efa7eb\") " Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.712115 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts" (OuterVolumeSpecName: "scripts") pod "12be9160-11f9-4c9a-a72f-dc6f26efa7eb" (UID: "12be9160-11f9-4c9a-a72f-dc6f26efa7eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.714103 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2" (OuterVolumeSpecName: "kube-api-access-nm5n2") pod "12be9160-11f9-4c9a-a72f-dc6f26efa7eb" (UID: "12be9160-11f9-4c9a-a72f-dc6f26efa7eb"). InnerVolumeSpecName "kube-api-access-nm5n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.739019 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data" (OuterVolumeSpecName: "config-data") pod "12be9160-11f9-4c9a-a72f-dc6f26efa7eb" (UID: "12be9160-11f9-4c9a-a72f-dc6f26efa7eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.741668 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12be9160-11f9-4c9a-a72f-dc6f26efa7eb" (UID: "12be9160-11f9-4c9a-a72f-dc6f26efa7eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.810413 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm5n2\" (UniqueName: \"kubernetes.io/projected/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-kube-api-access-nm5n2\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.810666 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.810676 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:50 crc kubenswrapper[4780]: I1205 08:20:50.810684 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12be9160-11f9-4c9a-a72f-dc6f26efa7eb-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.055060 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:51 crc kubenswrapper[4780]: W1205 08:20:51.057907 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod992707da_fdf7_4f29_b044_5001e8179030.slice/crio-3170bf3c69b44af9b1d18c4811ad69efe1a18f54b9709c09d0f98cab13cb56b6 WatchSource:0}: Error finding container 3170bf3c69b44af9b1d18c4811ad69efe1a18f54b9709c09d0f98cab13cb56b6: Status 404 returned error can't find the container with id 3170bf3c69b44af9b1d18c4811ad69efe1a18f54b9709c09d0f98cab13cb56b6 Dec 05 08:20:51 crc kubenswrapper[4780]: W1205 08:20:51.127984 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod506f828e_216f_456c_91c9_ee53f5b4056e.slice/crio-32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880 WatchSource:0}: Error finding container 32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880: Status 404 returned error can't find the container with id 32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880 Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.128243 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.207166 4780 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.207182 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r6fqt" event={"ID":"12be9160-11f9-4c9a-a72f-dc6f26efa7eb","Type":"ContainerDied","Data":"7b9d4b5249ef4607f321ed25ec8d0af26aa78fc710eae6c642a4e9a29712d28f"}
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.215826 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9d4b5249ef4607f321ed25ec8d0af26aa78fc710eae6c642a4e9a29712d28f"
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.254143 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerStarted","Data":"3170bf3c69b44af9b1d18c4811ad69efe1a18f54b9709c09d0f98cab13cb56b6"}
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.281594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"506f828e-216f-456c-91c9-ee53f5b4056e","Type":"ContainerStarted","Data":"32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880"}
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.412927 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.413135 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-log" containerID="cri-o://5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765" gracePeriod=30
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.413530 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-api" containerID="cri-o://983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2" gracePeriod=30
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.425934 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.426337 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f77e8cf8-05f2-440b-83c2-12916307d33e" containerName="nova-scheduler-scheduler" containerID="cri-o://6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74" gracePeriod=30
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.444257 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:51 crc kubenswrapper[4780]: I1205 08:20:51.941816 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.041295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data\") pod \"11604acc-f3dc-4983-90c9-6283ea5814b5\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.043577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle\") pod \"11604acc-f3dc-4983-90c9-6283ea5814b5\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.043664 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs\") pod \"11604acc-f3dc-4983-90c9-6283ea5814b5\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.043685 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmldm\" (UniqueName: \"kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm\") pod \"11604acc-f3dc-4983-90c9-6283ea5814b5\" (UID: \"11604acc-f3dc-4983-90c9-6283ea5814b5\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.044577 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs" (OuterVolumeSpecName: "logs") pod "11604acc-f3dc-4983-90c9-6283ea5814b5" (UID: "11604acc-f3dc-4983-90c9-6283ea5814b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.047534 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm" (OuterVolumeSpecName: "kube-api-access-kmldm") pod "11604acc-f3dc-4983-90c9-6283ea5814b5" (UID: "11604acc-f3dc-4983-90c9-6283ea5814b5"). InnerVolumeSpecName "kube-api-access-kmldm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.066256 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data" (OuterVolumeSpecName: "config-data") pod "11604acc-f3dc-4983-90c9-6283ea5814b5" (UID: "11604acc-f3dc-4983-90c9-6283ea5814b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.068089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11604acc-f3dc-4983-90c9-6283ea5814b5" (UID: "11604acc-f3dc-4983-90c9-6283ea5814b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.145582 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.145621 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11604acc-f3dc-4983-90c9-6283ea5814b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.145634 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11604acc-f3dc-4983-90c9-6283ea5814b5-logs\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.145643 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmldm\" (UniqueName: \"kubernetes.io/projected/11604acc-f3dc-4983-90c9-6283ea5814b5-kube-api-access-kmldm\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.151805 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff01c3bd-eeb7-4336-8c20-b1bbc532ad62" path="/var/lib/kubelet/pods/ff01c3bd-eeb7-4336-8c20-b1bbc532ad62/volumes"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.290973 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"506f828e-216f-456c-91c9-ee53f5b4056e","Type":"ContainerStarted","Data":"baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.292092 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293266 4780 generic.go:334] "Generic (PLEG): container finished" podID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2" exitCode=0
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293296 4780 generic.go:334] "Generic (PLEG): container finished" podID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerID="5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765" exitCode=143
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293354 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293344 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerDied","Data":"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293469 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerDied","Data":"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11604acc-f3dc-4983-90c9-6283ea5814b5","Type":"ContainerDied","Data":"d14dc8c6785f6cc4031929ba9b7ad23b87b13474c4e8228ddca93cfc395d3ef2"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.293496 4780 scope.go:117] "RemoveContainer" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.295916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerStarted","Data":"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.295954 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerStarted","Data":"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"}
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.295984 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-log" containerID="cri-o://a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6" gracePeriod=30
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.296042 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-metadata" containerID="cri-o://6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6" gracePeriod=30
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.317214 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.317191489 podStartE2EDuration="2.317191489s" podCreationTimestamp="2025-12-05 08:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:52.312229835 +0000 UTC m=+5686.381746167" watchObservedRunningTime="2025-12-05 08:20:52.317191489 +0000 UTC m=+5686.386707821"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.319632 4780 scope.go:117] "RemoveContainer" containerID="5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765"
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.339018 4780 scope.go:117] "RemoveContainer" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"
Dec 05 08:20:52 crc kubenswrapper[4780]: E1205 08:20:52.339599 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": container with ID starting with 983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2 not found: ID does not exist" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"
\"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": container with ID starting with 983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2 not found: ID does not exist" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.339657 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"} err="failed to get container status \"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": rpc error: code = NotFound desc = could not find container \"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": container with ID starting with 983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2 not found: ID does not exist" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.339695 4780 scope.go:117] "RemoveContainer" containerID="5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765" Dec 05 08:20:52 crc kubenswrapper[4780]: E1205 08:20:52.340606 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765\": container with ID starting with 5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765 not found: ID does not exist" containerID="5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.340638 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765"} err="failed to get container status \"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765\": rpc error: code = NotFound desc = could not find container \"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765\": container with ID starting with 5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765 not found: ID does not exist" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.340655 4780 scope.go:117] "RemoveContainer" containerID="983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.341097 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2"} err="failed to get container status \"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": rpc error: code = NotFound desc = could not find container \"983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2\": container with ID starting with 983c4a4937ac4bea6e8ff20a793b4caf438dba8668c7369f03137c4720c2fbc2 not found: ID does not exist" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.341243 4780 scope.go:117] "RemoveContainer" containerID="5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.341637 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765"} err="failed to get container status \"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765\": rpc error: code = NotFound desc = could not find container \"5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765\": container with ID starting with 
5d920a853af693460604e44212085b6550b9c5ef166e399dbabeae108f886765 not found: ID does not exist" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.350401 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.350378172 podStartE2EDuration="2.350378172s" podCreationTimestamp="2025-12-05 08:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:52.336186466 +0000 UTC m=+5686.405702788" watchObservedRunningTime="2025-12-05 08:20:52.350378172 +0000 UTC m=+5686.419894504" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.362698 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.379459 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389198 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 08:20:52 crc kubenswrapper[4780]: E1205 08:20:52.389631 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12be9160-11f9-4c9a-a72f-dc6f26efa7eb" containerName="nova-manage" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389648 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12be9160-11f9-4c9a-a72f-dc6f26efa7eb" containerName="nova-manage" Dec 05 08:20:52 crc kubenswrapper[4780]: E1205 08:20:52.389676 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-api" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389682 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-api" Dec 05 08:20:52 crc kubenswrapper[4780]: E1205 08:20:52.389708 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-log" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389714 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-log" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389894 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="12be9160-11f9-4c9a-a72f-dc6f26efa7eb" containerName="nova-manage" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389907 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-api" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.389920 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" containerName="nova-api-log" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.390851 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.395347 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.445145 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.556194 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gzk\" (UniqueName: \"kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.556387 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.556450 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.556478 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.659314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gzk\" (UniqueName: \"kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.659765 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.659815 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.659835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.662687 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " 
pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.666171 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.666338 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.666425 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.684040 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.688717 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gzk\" (UniqueName: \"kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk\") pod \"nova-api-0\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.735051 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.758625 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.828120 4780 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.828232 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"]
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.828787 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="dnsmasq-dns" containerID="cri-o://ad7861383c538cf57d64e940aa48ff781dc39f76083750a28d2099729d7f7db9" gracePeriod=10
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.965308 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle\") pod \"992707da-fdf7-4f29-b044-5001e8179030\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.965349 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs\") pod \"992707da-fdf7-4f29-b044-5001e8179030\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.965429 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8btwp\" (UniqueName: \"kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp\") pod \"992707da-fdf7-4f29-b044-5001e8179030\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.965476 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data\") pod \"992707da-fdf7-4f29-b044-5001e8179030\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.965537 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs\") pod \"992707da-fdf7-4f29-b044-5001e8179030\" (UID: \"992707da-fdf7-4f29-b044-5001e8179030\") "
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.969242 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs" (OuterVolumeSpecName: "logs") pod "992707da-fdf7-4f29-b044-5001e8179030" (UID: "992707da-fdf7-4f29-b044-5001e8179030"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 08:20:52 crc kubenswrapper[4780]: I1205 08:20:52.970911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp" (OuterVolumeSpecName: "kube-api-access-8btwp") pod "992707da-fdf7-4f29-b044-5001e8179030" (UID: "992707da-fdf7-4f29-b044-5001e8179030"). InnerVolumeSpecName "kube-api-access-8btwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.004146 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "992707da-fdf7-4f29-b044-5001e8179030" (UID: "992707da-fdf7-4f29-b044-5001e8179030"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.013486 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data" (OuterVolumeSpecName: "config-data") pod "992707da-fdf7-4f29-b044-5001e8179030" (UID: "992707da-fdf7-4f29-b044-5001e8179030"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.033311 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "992707da-fdf7-4f29-b044-5001e8179030" (UID: "992707da-fdf7-4f29-b044-5001e8179030"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.067439 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.067478 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/992707da-fdf7-4f29-b044-5001e8179030-logs\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.067487 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8btwp\" (UniqueName: \"kubernetes.io/projected/992707da-fdf7-4f29-b044-5001e8179030-kube-api-access-8btwp\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.067497 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.067505 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/992707da-fdf7-4f29-b044-5001e8179030-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.285831 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 05 08:20:53 crc kubenswrapper[4780]: W1205 08:20:53.289986 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe2c9ef_bff4_4c21_a592_b29594d0eb81.slice/crio-6239e5190e022d937bb84ddb067e65204c23d3355429be35409d86c2722b12ce WatchSource:0}: Error finding container 6239e5190e022d937bb84ddb067e65204c23d3355429be35409d86c2722b12ce: Status 404 returned error can't find the container with id 6239e5190e022d937bb84ddb067e65204c23d3355429be35409d86c2722b12ce
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.326033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerStarted","Data":"6239e5190e022d937bb84ddb067e65204c23d3355429be35409d86c2722b12ce"}
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.346391 4780 generic.go:334] "Generic (PLEG): container finished" podID="23452946-1048-4f09-a637-3f2e3fa9af17" containerID="ad7861383c538cf57d64e940aa48ff781dc39f76083750a28d2099729d7f7db9" exitCode=0
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.346474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" event={"ID":"23452946-1048-4f09-a637-3f2e3fa9af17","Type":"ContainerDied","Data":"ad7861383c538cf57d64e940aa48ff781dc39f76083750a28d2099729d7f7db9"}
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.352538 4780 generic.go:334] "Generic (PLEG): container finished" podID="992707da-fdf7-4f29-b044-5001e8179030" containerID="6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6" exitCode=0
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.352576 4780 generic.go:334] "Generic (PLEG): container finished" podID="992707da-fdf7-4f29-b044-5001e8179030" containerID="a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6" exitCode=143
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.352839 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.353437 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerDied","Data":"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"}
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.353462 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerDied","Data":"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"}
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.353475 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"992707da-fdf7-4f29-b044-5001e8179030","Type":"ContainerDied","Data":"3170bf3c69b44af9b1d18c4811ad69efe1a18f54b9709c09d0f98cab13cb56b6"}
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.353493 4780 scope.go:117] "RemoveContainer" containerID="6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.482776 4780 scope.go:117] "RemoveContainer" containerID="a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.493563 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.504943 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.520916 4780 scope.go:117] "RemoveContainer" containerID="6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.521037 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.521468 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6\": container with ID starting with 6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6 not found: ID does not exist" containerID="6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.521502 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"} err="failed to get container status \"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6\": rpc error: code = NotFound desc = could not find container \"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6\": container with ID starting with 6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6 not found: ID does not exist"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.521538 4780 scope.go:117] "RemoveContainer" containerID="a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.525139 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6\": container with ID starting with a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6 not found: ID does not exist" containerID="a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.525181 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"} err="failed to get container status \"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6\": rpc error: code = NotFound desc = could not find container \"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6\": container with ID starting with a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6 not found: ID does not exist"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.525210 4780 scope.go:117] "RemoveContainer" containerID="6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.529175 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6"} err="failed to get container status \"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6\": rpc error: code = NotFound desc = could not find container \"6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6\": container with ID starting with 6dc6dbda7e7f3254de071a7971d4c7a142acd70ab9c12ea4115c60aea44c61d6 not found: ID does not exist"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.529220 4780 scope.go:117] "RemoveContainer" containerID="a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.530350 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6"} err="failed to get container status \"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6\": rpc error: code = NotFound desc = could not find container \"a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6\": container with ID starting with a9462315a6fd4881c9bca45a9005c3f5f45d03013ae0d02f8fbca22aeeef01d6 not found: ID does not exist"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.539667 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.540081 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-metadata"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540098 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-metadata"
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.540126 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-log"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540133 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-log"
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.540148 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="dnsmasq-dns"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540154 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="dnsmasq-dns"
Dec 05 08:20:53 crc kubenswrapper[4780]: E1205 08:20:53.540165 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="init"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540170 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="init"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540336 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-log"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540349 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="992707da-fdf7-4f29-b044-5001e8179030" containerName="nova-metadata-metadata"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.540412 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" containerName="dnsmasq-dns"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.541307 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.546254 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.546426 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.567275 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.579726 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5cnx\" (UniqueName: \"kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx\") pod \"23452946-1048-4f09-a637-3f2e3fa9af17\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") "
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.579767 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc\") pod \"23452946-1048-4f09-a637-3f2e3fa9af17\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") "
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.579818 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb\") pod \"23452946-1048-4f09-a637-3f2e3fa9af17\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") "
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.579869 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config\") pod \"23452946-1048-4f09-a637-3f2e3fa9af17\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") "
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.579959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb\") pod \"23452946-1048-4f09-a637-3f2e3fa9af17\" (UID: \"23452946-1048-4f09-a637-3f2e3fa9af17\") "
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.594346 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx" (OuterVolumeSpecName: "kube-api-access-f5cnx") pod "23452946-1048-4f09-a637-3f2e3fa9af17" (UID: "23452946-1048-4f09-a637-3f2e3fa9af17"). InnerVolumeSpecName "kube-api-access-f5cnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.634066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23452946-1048-4f09-a637-3f2e3fa9af17" (UID: "23452946-1048-4f09-a637-3f2e3fa9af17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.666197 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23452946-1048-4f09-a637-3f2e3fa9af17" (UID: "23452946-1048-4f09-a637-3f2e3fa9af17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.672279 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23452946-1048-4f09-a637-3f2e3fa9af17" (UID: "23452946-1048-4f09-a637-3f2e3fa9af17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.673793 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config" (OuterVolumeSpecName: "config") pod "23452946-1048-4f09-a637-3f2e3fa9af17" (UID: "23452946-1048-4f09-a637-3f2e3fa9af17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683059 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683189 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwt46\" (UniqueName: \"kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683250 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683284 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683769 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683817 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-config\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683834 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683846 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cnx\" (UniqueName: \"kubernetes.io/projected/23452946-1048-4f09-a637-3f2e3fa9af17-kube-api-access-f5cnx\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.683856 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23452946-1048-4f09-a637-3f2e3fa9af17-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.785735 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.785814 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwt46\" (UniqueName: \"kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.785866 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.785914 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.785983 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.787514 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.790948 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.794415 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.803586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.807918 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwt46\" (UniqueName: \"kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46\") pod \"nova-metadata-0\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " pod="openstack/nova-metadata-0"
Dec 05 08:20:53 crc kubenswrapper[4780]: I1205 08:20:53.949717 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.149340 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11604acc-f3dc-4983-90c9-6283ea5814b5" path="/var/lib/kubelet/pods/11604acc-f3dc-4983-90c9-6283ea5814b5/volumes"
Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.150297 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992707da-fdf7-4f29-b044-5001e8179030" path="/var/lib/kubelet/pods/992707da-fdf7-4f29-b044-5001e8179030/volumes"
Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.363844 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerStarted","Data":"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884"}
Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.363966 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerStarted","Data":"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183"}
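The block above interleaves two halves of the same reconciliation loop: UnmountVolume/"Volume detached" entries for the outgoing dnsmasq pod (UID 23452946-...) and VerifyControllerAttachedVolume/MountVolume/SetUp entries for the incoming nova-metadata-0 (UID db41281b-...). Kubelet repeatedly diffs the volumes its pods should have against the volumes actually mounted and acts on the difference. A minimal sketch of that diff, with simplified stand-ins for kubelet's desired- and actual-state caches (the types and keys here are illustrative, not kubelet's):

    package main

    import "fmt"

    // volumeKey identifies a volume roughly the way the log does: plugin/podUID-volumeName.
    type volumeKey string

    // reconcile compares desired against actual and returns what to mount and unmount,
    // mirroring the order seen above: tear down stale volumes, then set up new ones.
    func reconcile(desired, actual map[volumeKey]bool) (mount, unmount []volumeKey) {
    	for v := range actual {
    		if !desired[v] {
    			unmount = append(unmount, v) // e.g. the dnsmasq pod's configmaps
    		}
    	}
    	for v := range desired {
    		if !actual[v] {
    			mount = append(mount, v) // e.g. nova-metadata-0's secrets and logs dir
    		}
    	}
    	return mount, unmount
    }

    func main() {
    	actual := map[volumeKey]bool{"kubernetes.io/configmap/old-pod-dns-svc": true}
    	desired := map[volumeKey]bool{"kubernetes.io/secret/new-pod-config-data": true}
    	mount, unmount := reconcile(desired, actual)
    	fmt.Println("mount:", mount, "unmount:", unmount)
    }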
Need to start a new one" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.366104 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749f85f4b9-zj89c" event={"ID":"23452946-1048-4f09-a637-3f2e3fa9af17","Type":"ContainerDied","Data":"10d78c785d127dab3c1706fc1fa89434563e0c72ed818b0c2d705ddb83213817"} Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.366177 4780 scope.go:117] "RemoveContainer" containerID="ad7861383c538cf57d64e940aa48ff781dc39f76083750a28d2099729d7f7db9" Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.386445 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3864088900000002 podStartE2EDuration="2.38640889s" podCreationTimestamp="2025-12-05 08:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:54.384855488 +0000 UTC m=+5688.454371820" watchObservedRunningTime="2025-12-05 08:20:54.38640889 +0000 UTC m=+5688.455925222" Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.395692 4780 scope.go:117] "RemoveContainer" containerID="87ce1694f6d9d220015b27a238eacc181c5b49d2c75bbf9e835ec4340700dd58" Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.432587 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"] Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.441309 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-749f85f4b9-zj89c"] Dec 05 08:20:54 crc kubenswrapper[4780]: W1205 08:20:54.565785 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb41281b_80f8_40ef_bc48_f71082c6dd00.slice/crio-6711b128e0e3810c7979e433776cf8ee6db0e4e7a58d08127ade121392324bdd WatchSource:0}: Error finding container 6711b128e0e3810c7979e433776cf8ee6db0e4e7a58d08127ade121392324bdd: Status 404 returned error can't find the container with id 6711b128e0e3810c7979e433776cf8ee6db0e4e7a58d08127ade121392324bdd Dec 05 08:20:54 crc kubenswrapper[4780]: I1205 08:20:54.566684 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:20:55 crc kubenswrapper[4780]: I1205 08:20:55.382012 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerStarted","Data":"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1"} Dec 05 08:20:55 crc kubenswrapper[4780]: I1205 08:20:55.382341 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerStarted","Data":"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86"} Dec 05 08:20:55 crc kubenswrapper[4780]: I1205 08:20:55.382353 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerStarted","Data":"6711b128e0e3810c7979e433776cf8ee6db0e4e7a58d08127ade121392324bdd"} Dec 05 08:20:55 crc kubenswrapper[4780]: I1205 08:20:55.406828 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.406808804 podStartE2EDuration="2.406808804s" podCreationTimestamp="2025-12-05 08:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:20:55.396628367 +0000 UTC m=+5689.466144699" watchObservedRunningTime="2025-12-05 08:20:55.406808804 +0000 UTC m=+5689.476325136" Dec 05 08:20:56 crc kubenswrapper[4780]: I1205 08:20:56.151961 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23452946-1048-4f09-a637-3f2e3fa9af17" path="/var/lib/kubelet/pods/23452946-1048-4f09-a637-3f2e3fa9af17/volumes" Dec 05 08:20:58 crc kubenswrapper[4780]: I1205 08:20:58.950827 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 08:20:58 crc kubenswrapper[4780]: I1205 08:20:58.951324 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 08:21:00 crc kubenswrapper[4780]: I1205 08:21:00.610633 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 08:21:02 crc kubenswrapper[4780]: I1205 08:21:02.760054 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:21:02 crc kubenswrapper[4780]: I1205 08:21:02.760423 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:21:03 crc kubenswrapper[4780]: I1205 08:21:03.842080 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:03 crc kubenswrapper[4780]: I1205 08:21:03.842128 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:03 crc kubenswrapper[4780]: I1205 08:21:03.952425 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 08:21:03 crc kubenswrapper[4780]: I1205 08:21:03.952467 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 08:21:04 crc kubenswrapper[4780]: I1205 08:21:04.961031 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:04 crc kubenswrapper[4780]: I1205 08:21:04.961119 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:13 crc kubenswrapper[4780]: I1205 08:21:13.843348 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:13 crc kubenswrapper[4780]: I1205 08:21:13.843494 4780 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:14 crc kubenswrapper[4780]: I1205 08:21:14.959124 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:14 crc kubenswrapper[4780]: I1205 08:21:14.959125 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.536432 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.604724 4780 generic.go:334] "Generic (PLEG): container finished" podID="6e3410ac-49dd-4132-85d3-0dcea6218053" containerID="5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc" exitCode=137 Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.604766 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.604769 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e3410ac-49dd-4132-85d3-0dcea6218053","Type":"ContainerDied","Data":"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc"} Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.604936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e3410ac-49dd-4132-85d3-0dcea6218053","Type":"ContainerDied","Data":"42aeda67e2957cbbaad2df40979736b7aff9bf3ae12c73081faa6d5108fba47e"} Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.604969 4780 scope.go:117] "RemoveContainer" containerID="5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.635083 4780 scope.go:117] "RemoveContainer" containerID="5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc" Dec 05 08:21:18 crc kubenswrapper[4780]: E1205 08:21:18.635562 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc\": container with ID starting with 5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc not found: ID does not exist" containerID="5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.635610 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc"} err="failed to get container status \"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc\": rpc error: code = NotFound desc = could not find container 
\"5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc\": container with ID starting with 5f67bd553f41efd7956aed04fc9ab9347d9fb21cf73a8ff39b473394ae72e6fc not found: ID does not exist" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.688112 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data\") pod \"6e3410ac-49dd-4132-85d3-0dcea6218053\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.688209 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p54hs\" (UniqueName: \"kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs\") pod \"6e3410ac-49dd-4132-85d3-0dcea6218053\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.688394 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle\") pod \"6e3410ac-49dd-4132-85d3-0dcea6218053\" (UID: \"6e3410ac-49dd-4132-85d3-0dcea6218053\") " Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.693590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs" (OuterVolumeSpecName: "kube-api-access-p54hs") pod "6e3410ac-49dd-4132-85d3-0dcea6218053" (UID: "6e3410ac-49dd-4132-85d3-0dcea6218053"). InnerVolumeSpecName "kube-api-access-p54hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.717549 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data" (OuterVolumeSpecName: "config-data") pod "6e3410ac-49dd-4132-85d3-0dcea6218053" (UID: "6e3410ac-49dd-4132-85d3-0dcea6218053"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.723137 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e3410ac-49dd-4132-85d3-0dcea6218053" (UID: "6e3410ac-49dd-4132-85d3-0dcea6218053"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.791044 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.791080 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p54hs\" (UniqueName: \"kubernetes.io/projected/6e3410ac-49dd-4132-85d3-0dcea6218053-kube-api-access-p54hs\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.791090 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3410ac-49dd-4132-85d3-0dcea6218053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.938818 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.953031 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.962059 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:21:18 crc kubenswrapper[4780]: E1205 08:21:18.962497 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3410ac-49dd-4132-85d3-0dcea6218053" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.962519 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3410ac-49dd-4132-85d3-0dcea6218053" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.962721 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3410ac-49dd-4132-85d3-0dcea6218053" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.963396 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.965586 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.965731 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.967646 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 05 08:21:18 crc kubenswrapper[4780]: I1205 08:21:18.972318 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.096638 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9mx\" (UniqueName: \"kubernetes.io/projected/e85de255-98ca-4e1b-8a26-96597ae078aa-kube-api-access-rj9mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.096746 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.096779 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.096847 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.097079 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.199098 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.199219 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.199292 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9mx\" (UniqueName: \"kubernetes.io/projected/e85de255-98ca-4e1b-8a26-96597ae078aa-kube-api-access-rj9mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.199419 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.199446 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.203395 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.203578 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.203690 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.205414 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e85de255-98ca-4e1b-8a26-96597ae078aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.216427 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9mx\" (UniqueName: \"kubernetes.io/projected/e85de255-98ca-4e1b-8a26-96597ae078aa-kube-api-access-rj9mx\") pod \"nova-cell1-novncproxy-0\" (UID: \"e85de255-98ca-4e1b-8a26-96597ae078aa\") " pod="openstack/nova-cell1-novncproxy-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:19 crc kubenswrapper[4780]: I1205 08:21:19.747404 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 08:21:19 crc kubenswrapper[4780]: W1205 08:21:19.750542 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85de255_98ca_4e1b_8a26_96597ae078aa.slice/crio-2e0907cfb0e2f4d60a5aa11b231315375a8ac80d4f221d39d7c5bb12713e775f WatchSource:0}: Error finding container 2e0907cfb0e2f4d60a5aa11b231315375a8ac80d4f221d39d7c5bb12713e775f: Status 404 returned error can't find the container with id 2e0907cfb0e2f4d60a5aa11b231315375a8ac80d4f221d39d7c5bb12713e775f Dec 05 08:21:20 crc kubenswrapper[4780]: I1205 08:21:20.151443 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3410ac-49dd-4132-85d3-0dcea6218053" path="/var/lib/kubelet/pods/6e3410ac-49dd-4132-85d3-0dcea6218053/volumes" Dec 05 08:21:20 crc kubenswrapper[4780]: I1205 08:21:20.624971 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e85de255-98ca-4e1b-8a26-96597ae078aa","Type":"ContainerStarted","Data":"3edbcbc72fe176fa0c0d0cb52740d73e8f4a093db750b51431bd9659ab2f6e1a"} Dec 05 08:21:20 crc kubenswrapper[4780]: I1205 08:21:20.625018 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e85de255-98ca-4e1b-8a26-96597ae078aa","Type":"ContainerStarted","Data":"2e0907cfb0e2f4d60a5aa11b231315375a8ac80d4f221d39d7c5bb12713e775f"} Dec 05 08:21:20 crc kubenswrapper[4780]: I1205 08:21:20.651606 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.651586886 podStartE2EDuration="2.651586886s" podCreationTimestamp="2025-12-05 08:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:21:20.638679986 +0000 UTC m=+5714.708196318" watchObservedRunningTime="2025-12-05 08:21:20.651586886 +0000 UTC m=+5714.721103218" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.397165 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.566685 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle\") pod \"f77e8cf8-05f2-440b-83c2-12916307d33e\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.566863 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgpb5\" (UniqueName: \"kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5\") pod \"f77e8cf8-05f2-440b-83c2-12916307d33e\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.567010 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data\") pod \"f77e8cf8-05f2-440b-83c2-12916307d33e\" (UID: \"f77e8cf8-05f2-440b-83c2-12916307d33e\") " Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.571612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5" (OuterVolumeSpecName: "kube-api-access-pgpb5") pod "f77e8cf8-05f2-440b-83c2-12916307d33e" (UID: "f77e8cf8-05f2-440b-83c2-12916307d33e"). InnerVolumeSpecName "kube-api-access-pgpb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.593943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f77e8cf8-05f2-440b-83c2-12916307d33e" (UID: "f77e8cf8-05f2-440b-83c2-12916307d33e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.594424 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data" (OuterVolumeSpecName: "config-data") pod "f77e8cf8-05f2-440b-83c2-12916307d33e" (UID: "f77e8cf8-05f2-440b-83c2-12916307d33e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.641571 4780 generic.go:334] "Generic (PLEG): container finished" podID="f77e8cf8-05f2-440b-83c2-12916307d33e" containerID="6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74" exitCode=137 Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.641622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f77e8cf8-05f2-440b-83c2-12916307d33e","Type":"ContainerDied","Data":"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74"} Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.641653 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f77e8cf8-05f2-440b-83c2-12916307d33e","Type":"ContainerDied","Data":"99b3da8e3e182a77246807dfd7eeba7f4512577d664d5c2f03f3c567b6126bbc"} Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.641670 4780 scope.go:117] "RemoveContainer" containerID="6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.641689 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.669520 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.669560 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgpb5\" (UniqueName: \"kubernetes.io/projected/f77e8cf8-05f2-440b-83c2-12916307d33e-kube-api-access-pgpb5\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.669572 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77e8cf8-05f2-440b-83c2-12916307d33e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.677204 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.678705 4780 scope.go:117] "RemoveContainer" containerID="6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74" Dec 05 08:21:22 crc kubenswrapper[4780]: E1205 08:21:22.679307 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74\": container with ID starting with 6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74 not found: ID does not exist" containerID="6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.679344 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74"} err="failed to get container status \"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74\": rpc error: code = NotFound desc = could not find container \"6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74\": container with ID starting with 6fbd61585b94e11593e5e6b0b82e84b3b8f3d5be0e3a2c146470de71ce7eda74 not found: ID does not exist" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.698523 4780 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.711145 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:22 crc kubenswrapper[4780]: E1205 08:21:22.711675 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77e8cf8-05f2-440b-83c2-12916307d33e" containerName="nova-scheduler-scheduler" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.711694 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77e8cf8-05f2-440b-83c2-12916307d33e" containerName="nova-scheduler-scheduler" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.711969 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77e8cf8-05f2-440b-83c2-12916307d33e" containerName="nova-scheduler-scheduler" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.712699 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.717356 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.721721 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.759441 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.759665 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.871997 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.872049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.872156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlgv\" (UniqueName: \"kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.974492 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.974838 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " 
pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.974931 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlgv\" (UniqueName: \"kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.977800 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.977857 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:22 crc kubenswrapper[4780]: I1205 08:21:22.992529 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlgv\" (UniqueName: \"kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv\") pod \"nova-scheduler-0\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " pod="openstack/nova-scheduler-0" Dec 05 08:21:23 crc kubenswrapper[4780]: I1205 08:21:23.038628 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:21:23 crc kubenswrapper[4780]: I1205 08:21:23.484718 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:23 crc kubenswrapper[4780]: W1205 08:21:23.488207 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfe8737d_10ba_4e20_81dc_faf3d954d0e7.slice/crio-163d6e59ec15b470d24262f889f7e1b1c6a431effdf687627533af7aab56cd36 WatchSource:0}: Error finding container 163d6e59ec15b470d24262f889f7e1b1c6a431effdf687627533af7aab56cd36: Status 404 returned error can't find the container with id 163d6e59ec15b470d24262f889f7e1b1c6a431effdf687627533af7aab56cd36 Dec 05 08:21:23 crc kubenswrapper[4780]: I1205 08:21:23.651390 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfe8737d-10ba-4e20-81dc-faf3d954d0e7","Type":"ContainerStarted","Data":"163d6e59ec15b470d24262f889f7e1b1c6a431effdf687627533af7aab56cd36"} Dec 05 08:21:23 crc kubenswrapper[4780]: I1205 08:21:23.842078 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:23 crc kubenswrapper[4780]: I1205 08:21:23.842078 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.151132 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f77e8cf8-05f2-440b-83c2-12916307d33e" path="/var/lib/kubelet/pods/f77e8cf8-05f2-440b-83c2-12916307d33e/volumes" Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.312065 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.667140 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfe8737d-10ba-4e20-81dc-faf3d954d0e7","Type":"ContainerStarted","Data":"a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e"} Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.687810 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.687791247 podStartE2EDuration="2.687791247s" podCreationTimestamp="2025-12-05 08:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:21:24.679102411 +0000 UTC m=+5718.748618733" watchObservedRunningTime="2025-12-05 08:21:24.687791247 +0000 UTC m=+5718.757307579" Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.959082 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:24 crc kubenswrapper[4780]: I1205 08:21:24.959101 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:28 crc kubenswrapper[4780]: I1205 08:21:28.039265 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 08:21:28 crc kubenswrapper[4780]: I1205 08:21:28.824116 4780 scope.go:117] "RemoveContainer" containerID="d03795b34e323e682341abba024fe2d3c90f2f36cb2799ab0a81cdb1a8faaa97" Dec 05 08:21:28 crc kubenswrapper[4780]: I1205 08:21:28.846158 4780 scope.go:117] "RemoveContainer" containerID="697df2e2fbccdaee7cc5a6262c2dacef5ff13e09b9516ecb857ee16a856b483f" Dec 05 08:21:28 crc kubenswrapper[4780]: I1205 08:21:28.865064 4780 scope.go:117] "RemoveContainer" containerID="7d7defa5f0b96f025b86f67bc0e1d26b2570fc0df74d5ad63367066ce374305d" Dec 05 08:21:28 crc kubenswrapper[4780]: I1205 08:21:28.881146 4780 scope.go:117] "RemoveContainer" containerID="6cc68c21af799afa8290da5e5f229cc099951f4191cb5d6187199ce97d3780a2" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.312354 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.331614 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.767588 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.908836 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.909178 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.920125 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"] Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.921355 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.923472 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.923504 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 05 08:21:29 crc kubenswrapper[4780]: I1205 08:21:29.932767 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"] Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.010043 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.010127 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7tm\" (UniqueName: \"kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.010231 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.010348 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.112646 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7tm\" (UniqueName: \"kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.112793 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.112851 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.113024 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.120796 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.139545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.153495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.155490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7tm\" (UniqueName: \"kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm\") pod \"nova-cell1-cell-mapping-bcvz8\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.246975 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.700943 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"] Dec 05 08:21:30 crc kubenswrapper[4780]: I1205 08:21:30.757040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bcvz8" event={"ID":"02e1ae93-75ba-4434-8a7d-9f999bab7f8a","Type":"ContainerStarted","Data":"51f89a30451e1c27ae9a23df65236c144305148c18d40492407ce73ee9058787"} Dec 05 08:21:31 crc kubenswrapper[4780]: I1205 08:21:31.767783 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bcvz8" event={"ID":"02e1ae93-75ba-4434-8a7d-9f999bab7f8a","Type":"ContainerStarted","Data":"27dc52d6a2aee452a13408c6a41480b35782838df77c8a5f48350a2ba9843632"} Dec 05 08:21:31 crc kubenswrapper[4780]: I1205 08:21:31.795586 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bcvz8" podStartSLOduration=2.79556284 podStartE2EDuration="2.79556284s" podCreationTimestamp="2025-12-05 08:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:21:31.786242997 +0000 UTC m=+5725.855759329" watchObservedRunningTime="2025-12-05 08:21:31.79556284 +0000 UTC m=+5725.865079172" Dec 05 08:21:33 crc kubenswrapper[4780]: I1205 08:21:33.039310 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 08:21:33 crc kubenswrapper[4780]: I1205 08:21:33.066620 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 08:21:33 crc kubenswrapper[4780]: I1205 08:21:33.811436 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 08:21:33 crc kubenswrapper[4780]: I1205 08:21:33.843145 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:33 crc kubenswrapper[4780]: I1205 08:21:33.843137 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:34 crc kubenswrapper[4780]: I1205 08:21:34.959121 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:34 crc kubenswrapper[4780]: I1205 08:21:34.959139 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.85:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:21:36 crc kubenswrapper[4780]: I1205 08:21:36.819524 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" containerID="27dc52d6a2aee452a13408c6a41480b35782838df77c8a5f48350a2ba9843632" exitCode=0 Dec 05 08:21:36 crc kubenswrapper[4780]: I1205 08:21:36.819599 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bcvz8" event={"ID":"02e1ae93-75ba-4434-8a7d-9f999bab7f8a","Type":"ContainerDied","Data":"27dc52d6a2aee452a13408c6a41480b35782838df77c8a5f48350a2ba9843632"} Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.241197 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.293579 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle\") pod \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.293663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts\") pod \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.293843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data\") pod \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.293889 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7tm\" (UniqueName: \"kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm\") pod \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\" (UID: \"02e1ae93-75ba-4434-8a7d-9f999bab7f8a\") " Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.299470 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm" (OuterVolumeSpecName: "kube-api-access-pm7tm") pod "02e1ae93-75ba-4434-8a7d-9f999bab7f8a" (UID: "02e1ae93-75ba-4434-8a7d-9f999bab7f8a"). InnerVolumeSpecName "kube-api-access-pm7tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.299539 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts" (OuterVolumeSpecName: "scripts") pod "02e1ae93-75ba-4434-8a7d-9f999bab7f8a" (UID: "02e1ae93-75ba-4434-8a7d-9f999bab7f8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.325613 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data" (OuterVolumeSpecName: "config-data") pod "02e1ae93-75ba-4434-8a7d-9f999bab7f8a" (UID: "02e1ae93-75ba-4434-8a7d-9f999bab7f8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.329502 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02e1ae93-75ba-4434-8a7d-9f999bab7f8a" (UID: "02e1ae93-75ba-4434-8a7d-9f999bab7f8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.396699 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.397014 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7tm\" (UniqueName: \"kubernetes.io/projected/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-kube-api-access-pm7tm\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.397098 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.397170 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02e1ae93-75ba-4434-8a7d-9f999bab7f8a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.839709 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bcvz8" event={"ID":"02e1ae93-75ba-4434-8a7d-9f999bab7f8a","Type":"ContainerDied","Data":"51f89a30451e1c27ae9a23df65236c144305148c18d40492407ce73ee9058787"} Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.839754 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f89a30451e1c27ae9a23df65236c144305148c18d40492407ce73ee9058787" Dec 05 08:21:38 crc kubenswrapper[4780]: I1205 08:21:38.839753 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bcvz8" Dec 05 08:21:38 crc kubenswrapper[4780]: E1205 08:21:38.947102 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02e1ae93_75ba_4434_8a7d_9f999bab7f8a.slice\": RecentStats: unable to find data in memory cache]" Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.003622 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.004237 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" containerID="cri-o://454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183" gracePeriod=30 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.004343 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" containerID="cri-o://78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884" gracePeriod=30 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.014637 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.014833 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" containerID="cri-o://a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" gracePeriod=30 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.031019 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.031344 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" containerID="cri-o://a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86" gracePeriod=30 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.031441 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" containerID="cri-o://594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1" gracePeriod=30 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.848969 4780 generic.go:334] "Generic (PLEG): container finished" podID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerID="454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183" exitCode=143 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.849036 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerDied","Data":"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183"} Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.850821 4780 generic.go:334] "Generic (PLEG): container finished" podID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerID="a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86" exitCode=143 Dec 05 08:21:39 crc kubenswrapper[4780]: I1205 08:21:39.850850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerDied","Data":"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86"} Dec 05 08:21:43 crc kubenswrapper[4780]: E1205 08:21:43.042357 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:43 crc kubenswrapper[4780]: E1205 08:21:43.044476 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:43 crc kubenswrapper[4780]: E1205 08:21:43.046294 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:43 crc kubenswrapper[4780]: E1205 08:21:43.046377 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:21:48 crc kubenswrapper[4780]: E1205 08:21:48.041153 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:48 crc kubenswrapper[4780]: E1205 08:21:48.052870 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:48 crc kubenswrapper[4780]: E1205 08:21:48.055097 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:48 crc kubenswrapper[4780]: E1205 08:21:48.055168 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.065272 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6429m"] Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.075081 4780 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/keystone-37db-account-create-update-2qq4m"] Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.084918 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-37db-account-create-update-2qq4m"] Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.093393 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6429m"] Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.152260 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e" path="/var/lib/kubelet/pods/7b9fa951-fe33-4e7e-8f01-8bd63e78cf8e/volumes" Dec 05 08:21:48 crc kubenswrapper[4780]: I1205 08:21:48.153491 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa" path="/var/lib/kubelet/pods/c1c817d6-0fa1-4004-a38a-6b2eb5fb25aa/volumes" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.879109 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.885985 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.971723 4780 generic.go:334] "Generic (PLEG): container finished" podID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerID="594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1" exitCode=0 Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.971824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerDied","Data":"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1"} Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.971958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"db41281b-80f8-40ef-bc48-f71082c6dd00","Type":"ContainerDied","Data":"6711b128e0e3810c7979e433776cf8ee6db0e4e7a58d08127ade121392324bdd"} Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.971979 4780 scope.go:117] "RemoveContainer" containerID="594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.972167 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.975106 4780 generic.go:334] "Generic (PLEG): container finished" podID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerID="78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884" exitCode=0 Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.975135 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerDied","Data":"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884"} Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.975400 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe2c9ef-bff4-4c21-a592-b29594d0eb81","Type":"ContainerDied","Data":"6239e5190e022d937bb84ddb067e65204c23d3355429be35409d86c2722b12ce"} Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.975156 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:21:52 crc kubenswrapper[4780]: I1205 08:21:52.992105 4780 scope.go:117] "RemoveContainer" containerID="a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000137 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data\") pod \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000184 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle\") pod \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000254 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwt46\" (UniqueName: \"kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46\") pod \"db41281b-80f8-40ef-bc48-f71082c6dd00\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000322 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data\") pod \"db41281b-80f8-40ef-bc48-f71082c6dd00\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000355 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs\") pod \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000386 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle\") pod \"db41281b-80f8-40ef-bc48-f71082c6dd00\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000418 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs\") pod \"db41281b-80f8-40ef-bc48-f71082c6dd00\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000494 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs\") pod \"db41281b-80f8-40ef-bc48-f71082c6dd00\" (UID: \"db41281b-80f8-40ef-bc48-f71082c6dd00\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.000709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4gzk\" (UniqueName: \"kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk\") pod \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\" (UID: \"efe2c9ef-bff4-4c21-a592-b29594d0eb81\") " Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.001731 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs" (OuterVolumeSpecName: "logs") pod "efe2c9ef-bff4-4c21-a592-b29594d0eb81" (UID: "efe2c9ef-bff4-4c21-a592-b29594d0eb81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.001920 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs" (OuterVolumeSpecName: "logs") pod "db41281b-80f8-40ef-bc48-f71082c6dd00" (UID: "db41281b-80f8-40ef-bc48-f71082c6dd00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.005767 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk" (OuterVolumeSpecName: "kube-api-access-z4gzk") pod "efe2c9ef-bff4-4c21-a592-b29594d0eb81" (UID: "efe2c9ef-bff4-4c21-a592-b29594d0eb81"). InnerVolumeSpecName "kube-api-access-z4gzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.008319 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46" (OuterVolumeSpecName: "kube-api-access-xwt46") pod "db41281b-80f8-40ef-bc48-f71082c6dd00" (UID: "db41281b-80f8-40ef-bc48-f71082c6dd00"). InnerVolumeSpecName "kube-api-access-xwt46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.012649 4780 scope.go:117] "RemoveContainer" containerID="594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.013111 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1\": container with ID starting with 594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1 not found: ID does not exist" containerID="594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.013184 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1"} err="failed to get container status \"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1\": rpc error: code = NotFound desc = could not find container \"594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1\": container with ID starting with 594e051f804bccf31ffb34fe0cb54736a3fc227c28c9e83183741d2c884bb4a1 not found: ID does not exist" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.013207 4780 scope.go:117] "RemoveContainer" containerID="a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.013539 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86\": container with ID starting with a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86 not found: ID does not exist" containerID="a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.013637 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86"} err="failed to get container status \"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86\": rpc error: code = NotFound desc = could not find container \"a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86\": container with ID starting with a66e2444e0e208b5ea8c067d6da72894b25e2b81ba9e7f192d030d8dfc4f7e86 not found: ID does not exist" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.013711 4780 scope.go:117] "RemoveContainer" containerID="78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.031411 4780 scope.go:117] "RemoveContainer" containerID="454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.033818 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe2c9ef-bff4-4c21-a592-b29594d0eb81" (UID: "efe2c9ef-bff4-4c21-a592-b29594d0eb81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.034285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data" (OuterVolumeSpecName: "config-data") pod "db41281b-80f8-40ef-bc48-f71082c6dd00" (UID: "db41281b-80f8-40ef-bc48-f71082c6dd00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.035556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db41281b-80f8-40ef-bc48-f71082c6dd00" (UID: "db41281b-80f8-40ef-bc48-f71082c6dd00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.037366 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data" (OuterVolumeSpecName: "config-data") pod "efe2c9ef-bff4-4c21-a592-b29594d0eb81" (UID: "efe2c9ef-bff4-4c21-a592-b29594d0eb81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.043043 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.044605 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.046623 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.046743 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.047619 4780 scope.go:117] "RemoveContainer" containerID="78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.048028 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884\": container with ID starting with 78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884 not found: ID does not exist" containerID="78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.048082 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884"} err="failed to get container status \"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884\": rpc error: code = NotFound desc = could not find container \"78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884\": container with ID starting with 78148e56416d17cabbdcb8d11caf80dbdf3a092596fc018445304de0d8b69884 not found: ID does not exist" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.048113 4780 scope.go:117] "RemoveContainer" containerID="454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.048389 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183\": container with ID starting with 454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183 not found: ID does not exist" containerID="454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.048519 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183"} err="failed to get container status \"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183\": rpc error: code = NotFound desc = could not find container \"454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183\": container with ID starting with 454715cd9167aa45ae7d8c62237485f3540e943cc6abca3b05f388ec0482e183 not found: ID does not exist" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.052469 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "db41281b-80f8-40ef-bc48-f71082c6dd00" (UID: "db41281b-80f8-40ef-bc48-f71082c6dd00"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102612 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4gzk\" (UniqueName: \"kubernetes.io/projected/efe2c9ef-bff4-4c21-a592-b29594d0eb81-kube-api-access-z4gzk\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102644 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102655 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe2c9ef-bff4-4c21-a592-b29594d0eb81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102664 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwt46\" (UniqueName: \"kubernetes.io/projected/db41281b-80f8-40ef-bc48-f71082c6dd00-kube-api-access-xwt46\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102673 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102681 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe2c9ef-bff4-4c21-a592-b29594d0eb81-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102689 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102697 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/db41281b-80f8-40ef-bc48-f71082c6dd00-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.102705 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db41281b-80f8-40ef-bc48-f71082c6dd00-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.312953 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 
05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.330299 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.343921 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.352713 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364184 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.364754 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364780 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.364797 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364805 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.364836 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364843 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-api" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.364871 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" containerName="nova-manage" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364897 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" containerName="nova-manage" Dec 05 08:21:53 crc kubenswrapper[4780]: E1205 08:21:53.364909 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.364916 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.365185 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" containerName="nova-api-log" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.365204 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-log" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.365217 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" containerName="nova-manage" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.365224 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" containerName="nova-metadata-metadata" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.365235 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" 
containerName="nova-api-api" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.366226 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.368238 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.368494 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.376806 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.378909 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.382735 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.387117 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.397148 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514418 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514506 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514654 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514710 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514741 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4wt\" (UniqueName: \"kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt\") pod \"nova-api-0\" (UID: 
\"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8st7\" (UniqueName: \"kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514866 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.514915 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617106 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8st7\" (UniqueName: \"kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617161 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617186 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617283 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617315 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617342 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617382 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.617399 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v4wt\" (UniqueName: \"kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.618565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.618966 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.625793 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.626040 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.626502 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.626722 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.627385 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.635420 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8st7\" (UniqueName: 
\"kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7\") pod \"nova-metadata-0\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.638012 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v4wt\" (UniqueName: \"kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt\") pod \"nova-api-0\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " pod="openstack/nova-api-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.689072 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 08:21:53 crc kubenswrapper[4780]: I1205 08:21:53.697433 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:21:54 crc kubenswrapper[4780]: I1205 08:21:54.137222 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 08:21:54 crc kubenswrapper[4780]: I1205 08:21:54.149285 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db41281b-80f8-40ef-bc48-f71082c6dd00" path="/var/lib/kubelet/pods/db41281b-80f8-40ef-bc48-f71082c6dd00/volumes" Dec 05 08:21:54 crc kubenswrapper[4780]: I1205 08:21:54.150547 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe2c9ef-bff4-4c21-a592-b29594d0eb81" path="/var/lib/kubelet/pods/efe2c9ef-bff4-4c21-a592-b29594d0eb81/volumes" Dec 05 08:21:54 crc kubenswrapper[4780]: I1205 08:21:54.191587 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:21:54 crc kubenswrapper[4780]: W1205 08:21:54.226162 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5190e0da_b23d_48c6_9a54_d699ff479332.slice/crio-52788695f630e9c95c4abd946d461596df098b09929741ffe2afd9487931372b WatchSource:0}: Error finding container 52788695f630e9c95c4abd946d461596df098b09929741ffe2afd9487931372b: Status 404 returned error can't find the container with id 52788695f630e9c95c4abd946d461596df098b09929741ffe2afd9487931372b Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.011177 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerStarted","Data":"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.011562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerStarted","Data":"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.011580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerStarted","Data":"52788695f630e9c95c4abd946d461596df098b09929741ffe2afd9487931372b"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.014517 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerStarted","Data":"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.014550 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerStarted","Data":"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.014564 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerStarted","Data":"661cc7582eed76348caa22e705e5379900b5207aa595f2f892383135d59c0360"} Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.032408 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.032386407 podStartE2EDuration="2.032386407s" podCreationTimestamp="2025-12-05 08:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:21:55.025570553 +0000 UTC m=+5749.095086905" watchObservedRunningTime="2025-12-05 08:21:55.032386407 +0000 UTC m=+5749.101902739" Dec 05 08:21:55 crc kubenswrapper[4780]: I1205 08:21:55.047619 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.047594111 podStartE2EDuration="2.047594111s" podCreationTimestamp="2025-12-05 08:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:21:55.043110089 +0000 UTC m=+5749.112626421" watchObservedRunningTime="2025-12-05 08:21:55.047594111 +0000 UTC m=+5749.117110443" Dec 05 08:21:58 crc kubenswrapper[4780]: E1205 08:21:58.042146 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:58 crc kubenswrapper[4780]: E1205 08:21:58.044658 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:58 crc kubenswrapper[4780]: E1205 08:21:58.048782 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:21:58 crc kubenswrapper[4780]: E1205 08:21:58.048849 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:21:58 crc kubenswrapper[4780]: I1205 08:21:58.690272 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 08:21:58 crc kubenswrapper[4780]: I1205 08:21:58.690324 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 08:21:59 crc kubenswrapper[4780]: I1205 08:21:59.041188 4780 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-np2gw"] Dec 05 08:21:59 crc kubenswrapper[4780]: I1205 08:21:59.052576 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-np2gw"] Dec 05 08:21:59 crc kubenswrapper[4780]: I1205 08:21:59.907477 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:21:59 crc kubenswrapper[4780]: I1205 08:21:59.907860 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:22:00 crc kubenswrapper[4780]: I1205 08:22:00.148661 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b355fa2-1090-4e02-b17d-323cd82a2b06" path="/var/lib/kubelet/pods/4b355fa2-1090-4e02-b17d-323cd82a2b06/volumes" Dec 05 08:22:03 crc kubenswrapper[4780]: E1205 08:22:03.041813 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:03 crc kubenswrapper[4780]: E1205 08:22:03.043990 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:03 crc kubenswrapper[4780]: E1205 08:22:03.046051 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:03 crc kubenswrapper[4780]: E1205 08:22:03.046145 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:22:03 crc kubenswrapper[4780]: I1205 08:22:03.689842 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 08:22:03 crc kubenswrapper[4780]: I1205 08:22:03.690297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 08:22:03 crc kubenswrapper[4780]: I1205 08:22:03.698051 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:22:03 crc kubenswrapper[4780]: I1205 08:22:03.698127 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:22:04 crc kubenswrapper[4780]: I1205 08:22:04.794988 4780 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.89:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:04 crc kubenswrapper[4780]: I1205 08:22:04.794978 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:04 crc kubenswrapper[4780]: I1205 08:22:04.795126 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:04 crc kubenswrapper[4780]: I1205 08:22:04.794856 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.89:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:08 crc kubenswrapper[4780]: E1205 08:22:08.042636 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:08 crc kubenswrapper[4780]: E1205 08:22:08.045442 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:08 crc kubenswrapper[4780]: E1205 08:22:08.046987 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 05 08:22:08 crc kubenswrapper[4780]: E1205 08:22:08.047024 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.156527 4780 generic.go:334] "Generic (PLEG): container finished" podID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" exitCode=137 Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.156949 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfe8737d-10ba-4e20-81dc-faf3d954d0e7","Type":"ContainerDied","Data":"a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e"} Dec 05 08:22:11 crc 
kubenswrapper[4780]: I1205 08:22:11.245240 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.295734 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nlgv\" (UniqueName: \"kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv\") pod \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.295846 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle\") pod \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.296117 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data\") pod \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\" (UID: \"cfe8737d-10ba-4e20-81dc-faf3d954d0e7\") " Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.301849 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv" (OuterVolumeSpecName: "kube-api-access-2nlgv") pod "cfe8737d-10ba-4e20-81dc-faf3d954d0e7" (UID: "cfe8737d-10ba-4e20-81dc-faf3d954d0e7"). InnerVolumeSpecName "kube-api-access-2nlgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.322368 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfe8737d-10ba-4e20-81dc-faf3d954d0e7" (UID: "cfe8737d-10ba-4e20-81dc-faf3d954d0e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.328490 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data" (OuterVolumeSpecName: "config-data") pod "cfe8737d-10ba-4e20-81dc-faf3d954d0e7" (UID: "cfe8737d-10ba-4e20-81dc-faf3d954d0e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.397602 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.397630 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nlgv\" (UniqueName: \"kubernetes.io/projected/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-kube-api-access-2nlgv\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:11 crc kubenswrapper[4780]: I1205 08:22:11.397641 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe8737d-10ba-4e20-81dc-faf3d954d0e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.171507 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfe8737d-10ba-4e20-81dc-faf3d954d0e7","Type":"ContainerDied","Data":"163d6e59ec15b470d24262f889f7e1b1c6a431effdf687627533af7aab56cd36"} Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.171826 4780 scope.go:117] "RemoveContainer" containerID="a6a908f351d9bb331f9c10c8ff8e361f7699b50d98ff5392c0958ae0b5799c2e" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.171974 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.198712 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.226491 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.236333 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:22:12 crc kubenswrapper[4780]: E1205 08:22:12.236773 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.236792 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.236998 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" containerName="nova-scheduler-scheduler" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.237643 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.240227 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.248150 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.418281 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.418327 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.418353 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhctf\" (UniqueName: \"kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.521200 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.521271 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.521302 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhctf\" (UniqueName: \"kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.529625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.534565 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.537815 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhctf\" (UniqueName: 
\"kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf\") pod \"nova-scheduler-0\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " pod="openstack/nova-scheduler-0" Dec 05 08:22:12 crc kubenswrapper[4780]: I1205 08:22:12.566034 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.050581 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-69nl5"] Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.058942 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-69nl5"] Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.067913 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.182446 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99fc83fe-5001-446e-aeca-106c7a5fd5ed","Type":"ContainerStarted","Data":"6e7e732a7f6086d6a6b6f192b30c79780eb8b09caa5c74f772ba7da5965eb4d4"} Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.694996 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.702094 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.702341 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.702927 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.704545 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.704619 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 08:22:13 crc kubenswrapper[4780]: I1205 08:22:13.707124 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.152443 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242f5b62-3d4e-4de1-ba14-3a3acce4a455" path="/var/lib/kubelet/pods/242f5b62-3d4e-4de1-ba14-3a3acce4a455/volumes" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.153332 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe8737d-10ba-4e20-81dc-faf3d954d0e7" path="/var/lib/kubelet/pods/cfe8737d-10ba-4e20-81dc-faf3d954d0e7/volumes" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.196133 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99fc83fe-5001-446e-aeca-106c7a5fd5ed","Type":"ContainerStarted","Data":"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490"} Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.197167 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.200304 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.202104 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.218634 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.218616423 podStartE2EDuration="2.218616423s" podCreationTimestamp="2025-12-05 08:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:22:14.214620935 +0000 UTC m=+5768.284137267" watchObservedRunningTime="2025-12-05 08:22:14.218616423 +0000 UTC m=+5768.288132745" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.493778 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.495533 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.514404 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.624655 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbnw\" (UniqueName: \"kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.624721 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.624772 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.624813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.624844 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.727732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: 
I1205 08:22:14.727803 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.727839 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.727864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.727987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbnw\" (UniqueName: \"kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.729177 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.729227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.729243 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.729749 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.747364 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbnw\" (UniqueName: \"kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw\") pod \"dnsmasq-dns-7976bdf7b5-kctmz\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:14 crc kubenswrapper[4780]: I1205 08:22:14.827517 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:15 crc kubenswrapper[4780]: I1205 08:22:15.313382 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:22:16 crc kubenswrapper[4780]: I1205 08:22:16.233955 4780 generic.go:334] "Generic (PLEG): container finished" podID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerID="cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec" exitCode=0 Dec 05 08:22:16 crc kubenswrapper[4780]: I1205 08:22:16.234303 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" event={"ID":"aaa7f853-e31b-4193-a8fd-761904d6671e","Type":"ContainerDied","Data":"cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec"} Dec 05 08:22:16 crc kubenswrapper[4780]: I1205 08:22:16.234378 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" event={"ID":"aaa7f853-e31b-4193-a8fd-761904d6671e","Type":"ContainerStarted","Data":"f43080646a66ec609481503cfdf90ea48fd9baaf0fca017a197946caea823fbb"} Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.157022 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.244143 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" event={"ID":"aaa7f853-e31b-4193-a8fd-761904d6671e","Type":"ContainerStarted","Data":"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c"} Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.244515 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-log" containerID="cri-o://a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c" gracePeriod=30 Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.244581 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-api" containerID="cri-o://eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616" gracePeriod=30 Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.499836 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" podStartSLOduration=3.499818039 podStartE2EDuration="3.499818039s" podCreationTimestamp="2025-12-05 08:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:22:17.276278198 +0000 UTC m=+5771.345794530" watchObservedRunningTime="2025-12-05 08:22:17.499818039 +0000 UTC m=+5771.569334371" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.506199 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.508038 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.520550 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.566706 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.585049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.585303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48qz\" (UniqueName: \"kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.585559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.686355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.686408 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48qz\" (UniqueName: \"kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.686473 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.686932 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.687136 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " 
pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.705313 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48qz\" (UniqueName: \"kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz\") pod \"redhat-operators-z9fmf\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:17 crc kubenswrapper[4780]: I1205 08:22:17.827462 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:18 crc kubenswrapper[4780]: I1205 08:22:18.253870 4780 generic.go:334] "Generic (PLEG): container finished" podID="5190e0da-b23d-48c6-9a54-d699ff479332" containerID="a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c" exitCode=143 Dec 05 08:22:18 crc kubenswrapper[4780]: I1205 08:22:18.253942 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerDied","Data":"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c"} Dec 05 08:22:18 crc kubenswrapper[4780]: I1205 08:22:18.254338 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:18 crc kubenswrapper[4780]: I1205 08:22:18.309363 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:19 crc kubenswrapper[4780]: I1205 08:22:19.264473 4780 generic.go:334] "Generic (PLEG): container finished" podID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerID="867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e" exitCode=0 Dec 05 08:22:19 crc kubenswrapper[4780]: I1205 08:22:19.264531 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerDied","Data":"867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e"} Dec 05 08:22:19 crc kubenswrapper[4780]: I1205 08:22:19.264781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerStarted","Data":"7452ed9473ce3b82a62a180556e7900aeea74c914815047125dace13126f061b"} Dec 05 08:22:19 crc kubenswrapper[4780]: I1205 08:22:19.266759 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.848561 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.975102 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle\") pod \"5190e0da-b23d-48c6-9a54-d699ff479332\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.975176 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data\") pod \"5190e0da-b23d-48c6-9a54-d699ff479332\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.975257 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4wt\" (UniqueName: \"kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt\") pod \"5190e0da-b23d-48c6-9a54-d699ff479332\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.975419 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs\") pod \"5190e0da-b23d-48c6-9a54-d699ff479332\" (UID: \"5190e0da-b23d-48c6-9a54-d699ff479332\") " Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.976123 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs" (OuterVolumeSpecName: "logs") pod "5190e0da-b23d-48c6-9a54-d699ff479332" (UID: "5190e0da-b23d-48c6-9a54-d699ff479332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:20 crc kubenswrapper[4780]: I1205 08:22:20.994222 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt" (OuterVolumeSpecName: "kube-api-access-7v4wt") pod "5190e0da-b23d-48c6-9a54-d699ff479332" (UID: "5190e0da-b23d-48c6-9a54-d699ff479332"). InnerVolumeSpecName "kube-api-access-7v4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.011701 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5190e0da-b23d-48c6-9a54-d699ff479332" (UID: "5190e0da-b23d-48c6-9a54-d699ff479332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.016054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data" (OuterVolumeSpecName: "config-data") pod "5190e0da-b23d-48c6-9a54-d699ff479332" (UID: "5190e0da-b23d-48c6-9a54-d699ff479332"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.078388 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5190e0da-b23d-48c6-9a54-d699ff479332-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.078420 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.078430 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5190e0da-b23d-48c6-9a54-d699ff479332-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.078441 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4wt\" (UniqueName: \"kubernetes.io/projected/5190e0da-b23d-48c6-9a54-d699ff479332-kube-api-access-7v4wt\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.285518 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerStarted","Data":"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8"} Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.287916 4780 generic.go:334] "Generic (PLEG): container finished" podID="5190e0da-b23d-48c6-9a54-d699ff479332" containerID="eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616" exitCode=0 Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.287957 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerDied","Data":"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616"} Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.287980 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5190e0da-b23d-48c6-9a54-d699ff479332","Type":"ContainerDied","Data":"52788695f630e9c95c4abd946d461596df098b09929741ffe2afd9487931372b"} Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.287996 4780 scope.go:117] "RemoveContainer" containerID="eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.288039 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.335304 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.344660 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.353337 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:21 crc kubenswrapper[4780]: E1205 08:22:21.353750 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-log" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.353769 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-log" Dec 05 08:22:21 crc kubenswrapper[4780]: E1205 08:22:21.353777 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-api" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.353783 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-api" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.354013 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-api" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.354033 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" containerName="nova-api-log" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.355073 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.357137 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.357196 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.357237 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.367757 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.383583 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.383650 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvn9\" (UniqueName: \"kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.383818 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.383841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.384594 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.384717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.486496 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.486857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.486909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.486940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.487001 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.487034 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvn9\" (UniqueName: \"kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.735673 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.738624 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvn9\" (UniqueName: \"kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.738915 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.739524 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.739836 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.745840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs\") pod \"nova-api-0\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " pod="openstack/nova-api-0" Dec 
05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.748789 4780 scope.go:117] "RemoveContainer" containerID="a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.849789 4780 scope.go:117] "RemoveContainer" containerID="eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616" Dec 05 08:22:21 crc kubenswrapper[4780]: E1205 08:22:21.850379 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616\": container with ID starting with eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616 not found: ID does not exist" containerID="eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.850422 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616"} err="failed to get container status \"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616\": rpc error: code = NotFound desc = could not find container \"eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616\": container with ID starting with eba0affe2e47e6a72e0c67caa4c1cbb51a749179d1e6c51c8f81dc45b1c4c616 not found: ID does not exist" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.850449 4780 scope.go:117] "RemoveContainer" containerID="a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c" Dec 05 08:22:21 crc kubenswrapper[4780]: E1205 08:22:21.850903 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c\": container with ID starting with a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c not found: ID does not exist" containerID="a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c" Dec 05 08:22:21 crc kubenswrapper[4780]: I1205 08:22:21.850925 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c"} err="failed to get container status \"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c\": rpc error: code = NotFound desc = could not find container \"a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c\": container with ID starting with a0a904e2ae5b71516f2ed9708da9f72a4479bfa8b726fa43e66367ef1a67b91c not found: ID does not exist" Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.036682 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.154900 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5190e0da-b23d-48c6-9a54-d699ff479332" path="/var/lib/kubelet/pods/5190e0da-b23d-48c6-9a54-d699ff479332/volumes" Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.299267 4780 generic.go:334] "Generic (PLEG): container finished" podID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerID="7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8" exitCode=0 Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.299343 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerDied","Data":"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8"} Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.493147 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.566939 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 08:22:22 crc kubenswrapper[4780]: I1205 08:22:22.599169 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 08:22:23 crc kubenswrapper[4780]: I1205 08:22:23.318340 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerStarted","Data":"e6a4321bf3c053841065adfbea756c3077b6543843c82dd3ba04a04a0b01521b"} Dec 05 08:22:23 crc kubenswrapper[4780]: I1205 08:22:23.347211 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 08:22:24 crc kubenswrapper[4780]: I1205 08:22:24.341226 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerStarted","Data":"3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57"} Dec 05 08:22:24 crc kubenswrapper[4780]: I1205 08:22:24.828934 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:22:24 crc kubenswrapper[4780]: I1205 08:22:24.904367 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"] Dec 05 08:22:24 crc kubenswrapper[4780]: I1205 08:22:24.909394 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="dnsmasq-dns" containerID="cri-o://d930fc3f58e1d4756fd2e44548f493a5f83e01dfac69ed8690360afdf73d00f0" gracePeriod=10 Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.355136 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerStarted","Data":"855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad"} Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.357867 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerStarted","Data":"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a"} Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.365263 4780 generic.go:334] "Generic (PLEG): 
container finished" podID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerID="d930fc3f58e1d4756fd2e44548f493a5f83e01dfac69ed8690360afdf73d00f0" exitCode=0 Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.365307 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" event={"ID":"cf57fe22-8606-4ec1-a301-1a33d1d3cb22","Type":"ContainerDied","Data":"d930fc3f58e1d4756fd2e44548f493a5f83e01dfac69ed8690360afdf73d00f0"} Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.365332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" event={"ID":"cf57fe22-8606-4ec1-a301-1a33d1d3cb22","Type":"ContainerDied","Data":"58b990357ab15c28f9ce53ddbe0a4dea6d54bce6df3c6ded919662ccce839cba"} Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.365344 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b990357ab15c28f9ce53ddbe0a4dea6d54bce6df3c6ded919662ccce839cba" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.378226 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.378208802 podStartE2EDuration="4.378208802s" podCreationTimestamp="2025-12-05 08:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:22:25.377228495 +0000 UTC m=+5779.446744827" watchObservedRunningTime="2025-12-05 08:22:25.378208802 +0000 UTC m=+5779.447725134" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.400961 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.403300 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9fmf" podStartSLOduration=3.448709836 podStartE2EDuration="8.403270984s" podCreationTimestamp="2025-12-05 08:22:17 +0000 UTC" firstStartedPulling="2025-12-05 08:22:19.266515481 +0000 UTC m=+5773.336031813" lastFinishedPulling="2025-12-05 08:22:24.221076629 +0000 UTC m=+5778.290592961" observedRunningTime="2025-12-05 08:22:25.392383508 +0000 UTC m=+5779.461899860" watchObservedRunningTime="2025-12-05 08:22:25.403270984 +0000 UTC m=+5779.472787326" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.485976 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config\") pod \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.486026 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrq4q\" (UniqueName: \"kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q\") pod \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.486098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc\") pod \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.486182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb\") pod \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.486225 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb\") pod \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\" (UID: \"cf57fe22-8606-4ec1-a301-1a33d1d3cb22\") " Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.513281 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q" (OuterVolumeSpecName: "kube-api-access-mrq4q") pod "cf57fe22-8606-4ec1-a301-1a33d1d3cb22" (UID: "cf57fe22-8606-4ec1-a301-1a33d1d3cb22"). InnerVolumeSpecName "kube-api-access-mrq4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.542742 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf57fe22-8606-4ec1-a301-1a33d1d3cb22" (UID: "cf57fe22-8606-4ec1-a301-1a33d1d3cb22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.558596 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf57fe22-8606-4ec1-a301-1a33d1d3cb22" (UID: "cf57fe22-8606-4ec1-a301-1a33d1d3cb22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.562503 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config" (OuterVolumeSpecName: "config") pod "cf57fe22-8606-4ec1-a301-1a33d1d3cb22" (UID: "cf57fe22-8606-4ec1-a301-1a33d1d3cb22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.580637 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf57fe22-8606-4ec1-a301-1a33d1d3cb22" (UID: "cf57fe22-8606-4ec1-a301-1a33d1d3cb22"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.592321 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.592360 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.592370 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.592380 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrq4q\" (UniqueName: \"kubernetes.io/projected/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-kube-api-access-mrq4q\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:25 crc kubenswrapper[4780]: I1205 08:22:25.592390 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf57fe22-8606-4ec1-a301-1a33d1d3cb22-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:26 crc kubenswrapper[4780]: I1205 08:22:26.373431 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54c966dbdf-p4mwh" Dec 05 08:22:26 crc kubenswrapper[4780]: I1205 08:22:26.403226 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"] Dec 05 08:22:26 crc kubenswrapper[4780]: I1205 08:22:26.415540 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54c966dbdf-p4mwh"] Dec 05 08:22:27 crc kubenswrapper[4780]: I1205 08:22:27.827826 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:27 crc kubenswrapper[4780]: I1205 08:22:27.828279 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:28 crc kubenswrapper[4780]: I1205 08:22:28.149340 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" path="/var/lib/kubelet/pods/cf57fe22-8606-4ec1-a301-1a33d1d3cb22/volumes" Dec 05 08:22:28 crc kubenswrapper[4780]: I1205 08:22:28.882806 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9fmf" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="registry-server" probeResult="failure" output=< Dec 05 08:22:28 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Dec 05 08:22:28 crc kubenswrapper[4780]: > Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.010926 4780 scope.go:117] "RemoveContainer" containerID="4e265fd94940ef981df6e88fa57f9417577253e469c4daa89e7a46ae80739f8f" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.070314 4780 scope.go:117] "RemoveContainer" containerID="177a6d11dbf916bc5a213be71e0fa8f74f807e6e93df99e5b13abf35b8af77db" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.093355 4780 scope.go:117] "RemoveContainer" containerID="2381b03ee8fe1ead2704c8928e5177c3b472fc60557ceb7479b761bc2609d263" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.140242 4780 
scope.go:117] "RemoveContainer" containerID="4de9adec3f660ea4ee648b9800ae676f6fbe54b9d570f7d145962723f3a20668" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.199664 4780 scope.go:117] "RemoveContainer" containerID="f77ee4ba9be3082c9bba20e309716e16d5dde91c3afc1f1f83d9897278314516" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.233355 4780 scope.go:117] "RemoveContainer" containerID="c8155498f5450db309d81d7857e0bd505348d241b244901951a7e91190a8c862" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.908138 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.908453 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.908506 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.909350 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:22:29 crc kubenswrapper[4780]: I1205 08:22:29.909411 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96" gracePeriod=600 Dec 05 08:22:30 crc kubenswrapper[4780]: E1205 08:22:30.133521 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda640087b_e493_4ac1_bef1_a9c05dd7c0ad.slice/crio-conmon-349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda640087b_e493_4ac1_bef1_a9c05dd7c0ad.slice/crio-349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96.scope\": RecentStats: unable to find data in memory cache]" Dec 05 08:22:30 crc kubenswrapper[4780]: I1205 08:22:30.412450 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96" exitCode=0 Dec 05 08:22:30 crc kubenswrapper[4780]: I1205 08:22:30.412498 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96"} Dec 05 08:22:30 crc kubenswrapper[4780]: I1205 
08:22:30.413028 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690"} Dec 05 08:22:30 crc kubenswrapper[4780]: I1205 08:22:30.413085 4780 scope.go:117] "RemoveContainer" containerID="abb5297ab3fe941518686912fa7d6411fa190611129f4c0fa32093baecae6368" Dec 05 08:22:32 crc kubenswrapper[4780]: I1205 08:22:32.038012 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:22:32 crc kubenswrapper[4780]: I1205 08:22:32.038926 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 08:22:33 crc kubenswrapper[4780]: I1205 08:22:33.055063 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.94:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:33 crc kubenswrapper[4780]: I1205 08:22:33.055081 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 08:22:37 crc kubenswrapper[4780]: I1205 08:22:37.876660 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:37 crc kubenswrapper[4780]: I1205 08:22:37.932484 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:38 crc kubenswrapper[4780]: I1205 08:22:38.115004 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:39 crc kubenswrapper[4780]: I1205 08:22:39.521184 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9fmf" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="registry-server" containerID="cri-o://87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a" gracePeriod=2 Dec 05 08:22:39 crc kubenswrapper[4780]: I1205 08:22:39.986555 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.085699 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content\") pod \"916c5ee9-c74a-4163-a4af-f0b875616e84\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.085946 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48qz\" (UniqueName: \"kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz\") pod \"916c5ee9-c74a-4163-a4af-f0b875616e84\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.086142 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities\") pod \"916c5ee9-c74a-4163-a4af-f0b875616e84\" (UID: \"916c5ee9-c74a-4163-a4af-f0b875616e84\") " Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.086846 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities" (OuterVolumeSpecName: "utilities") pod "916c5ee9-c74a-4163-a4af-f0b875616e84" (UID: "916c5ee9-c74a-4163-a4af-f0b875616e84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.091310 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz" (OuterVolumeSpecName: "kube-api-access-s48qz") pod "916c5ee9-c74a-4163-a4af-f0b875616e84" (UID: "916c5ee9-c74a-4163-a4af-f0b875616e84"). InnerVolumeSpecName "kube-api-access-s48qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.188396 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48qz\" (UniqueName: \"kubernetes.io/projected/916c5ee9-c74a-4163-a4af-f0b875616e84-kube-api-access-s48qz\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.188455 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.201427 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "916c5ee9-c74a-4163-a4af-f0b875616e84" (UID: "916c5ee9-c74a-4163-a4af-f0b875616e84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.290445 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916c5ee9-c74a-4163-a4af-f0b875616e84-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.540115 4780 generic.go:334] "Generic (PLEG): container finished" podID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerID="87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a" exitCode=0 Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.540187 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerDied","Data":"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a"} Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.540230 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9fmf" event={"ID":"916c5ee9-c74a-4163-a4af-f0b875616e84","Type":"ContainerDied","Data":"7452ed9473ce3b82a62a180556e7900aeea74c914815047125dace13126f061b"} Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.540278 4780 scope.go:117] "RemoveContainer" containerID="87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.540570 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9fmf" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.565901 4780 scope.go:117] "RemoveContainer" containerID="7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.600943 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.602388 4780 scope.go:117] "RemoveContainer" containerID="867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.612174 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9fmf"] Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.645137 4780 scope.go:117] "RemoveContainer" containerID="87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a" Dec 05 08:22:40 crc kubenswrapper[4780]: E1205 08:22:40.645635 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a\": container with ID starting with 87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a not found: ID does not exist" containerID="87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.645692 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a"} err="failed to get container status \"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a\": rpc error: code = NotFound desc = could not find container \"87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a\": container with ID starting with 87220c315a79b2c9441120d9601a2a18f9d75d05c6bdd45d6e39ffc9b5bbaa6a not found: ID does not exist" Dec 05 08:22:40 crc 
kubenswrapper[4780]: I1205 08:22:40.645761 4780 scope.go:117] "RemoveContainer" containerID="7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8" Dec 05 08:22:40 crc kubenswrapper[4780]: E1205 08:22:40.646152 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8\": container with ID starting with 7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8 not found: ID does not exist" containerID="7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.646187 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8"} err="failed to get container status \"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8\": rpc error: code = NotFound desc = could not find container \"7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8\": container with ID starting with 7e6823845e8893b4617d00c1cd5cd2d5a55316003c435098fdd79d1dbc34b9a8 not found: ID does not exist" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.646209 4780 scope.go:117] "RemoveContainer" containerID="867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e" Dec 05 08:22:40 crc kubenswrapper[4780]: E1205 08:22:40.646540 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e\": container with ID starting with 867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e not found: ID does not exist" containerID="867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e" Dec 05 08:22:40 crc kubenswrapper[4780]: I1205 08:22:40.646560 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e"} err="failed to get container status \"867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e\": rpc error: code = NotFound desc = could not find container \"867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e\": container with ID starting with 867ecda50446e9f5ecd794496165742cd234a9b0c644ab436638d059ead9971e not found: ID does not exist" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.043906 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.044290 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.044609 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.044634 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.049299 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.049700 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 08:22:42 crc kubenswrapper[4780]: I1205 08:22:42.178051 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" path="/var/lib/kubelet/pods/916c5ee9-c74a-4163-a4af-f0b875616e84/volumes" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.790233 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:22:53 crc kubenswrapper[4780]: E1205 08:22:53.805401 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="dnsmasq-dns" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805437 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="dnsmasq-dns" Dec 05 08:22:53 crc kubenswrapper[4780]: E1205 08:22:53.805457 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="init" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805465 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="init" Dec 05 08:22:53 crc kubenswrapper[4780]: E1205 08:22:53.805483 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="extract-utilities" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805491 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="extract-utilities" Dec 05 08:22:53 crc kubenswrapper[4780]: E1205 08:22:53.805502 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="registry-server" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805508 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="registry-server" Dec 05 08:22:53 crc kubenswrapper[4780]: E1205 08:22:53.805530 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="extract-content" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805537 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="extract-content" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805803 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf57fe22-8606-4ec1-a301-1a33d1d3cb22" containerName="dnsmasq-dns" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.805832 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="916c5ee9-c74a-4163-a4af-f0b875616e84" containerName="registry-server" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.806750 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.806839 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.811703 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.811862 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-mzstd" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.812008 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.812101 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.848505 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.848857 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-log" containerID="cri-o://43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1" gracePeriod=30 Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.849694 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-httpd" containerID="cri-o://38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1" gracePeriod=30 Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.941102 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.943056 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.952752 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.952843 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.952943 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscld\" (UniqueName: \"kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.952987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.953018 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:53 crc kubenswrapper[4780]: I1205 08:22:53.991780 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.001515 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.001842 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-log" containerID="cri-o://0f16f42d27f57a64bf0c7fd113418e5c0b643233359e1f0a997ea035d28b6e5f" gracePeriod=30 Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.002023 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-httpd" containerID="cri-o://7eb585f00e9cf3bca639d42415cfb25ab60b1feb49f5cc0f752f9a4756d0995c" gracePeriod=30 Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055074 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055145 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscld\" (UniqueName: \"kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055180 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055204 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055221 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99brm\" (UniqueName: \"kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055285 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055325 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055390 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.055421 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.056289 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.058459 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.059531 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.061528 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.073126 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscld\" (UniqueName: \"kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld\") pod \"horizon-69f69648fc-b9499\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.144308 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.164864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.164969 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.165035 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.165081 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99brm\" (UniqueName: \"kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.165111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.166063 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.166331 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.167231 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.174412 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.183738 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-99brm\" (UniqueName: \"kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm\") pod \"horizon-7bdb6c6645-jctkk\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.316553 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.643091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:22:54 crc kubenswrapper[4780]: W1205 08:22:54.645486 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f778cc_f0ab_4209_a99f_24455af7731d.slice/crio-57b6c62dc32d498e4508a49df01d06e36ba4dbaa2f6b747c0d4f982838af2257 WatchSource:0}: Error finding container 57b6c62dc32d498e4508a49df01d06e36ba4dbaa2f6b747c0d4f982838af2257: Status 404 returned error can't find the container with id 57b6c62dc32d498e4508a49df01d06e36ba4dbaa2f6b747c0d4f982838af2257 Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.664021 4780 generic.go:334] "Generic (PLEG): container finished" podID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerID="43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1" exitCode=143 Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.664099 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerDied","Data":"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1"} Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.669194 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerID="0f16f42d27f57a64bf0c7fd113418e5c0b643233359e1f0a997ea035d28b6e5f" exitCode=143 Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.669282 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerDied","Data":"0f16f42d27f57a64bf0c7fd113418e5c0b643233359e1f0a997ea035d28b6e5f"} Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.671043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerStarted","Data":"57b6c62dc32d498e4508a49df01d06e36ba4dbaa2f6b747c0d4f982838af2257"} Dec 05 08:22:54 crc kubenswrapper[4780]: I1205 08:22:54.796969 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.688290 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.700109 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerStarted","Data":"a7789774690df6a3bad7eb8452635084c19b7bd6f691783f91bc0060d4ae0694"} Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.720412 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.722203 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.725484 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.744879 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.803661 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.803697 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.803735 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8rg\" (UniqueName: \"kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.803769 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.803807 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.804002 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.804064 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.812581 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.846298 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.852827 4780 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.856883 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.905812 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906295 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906461 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906617 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906701 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906795 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8rg\" (UniqueName: \"kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.906972 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.907103 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.907659 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts\") pod \"horizon-6f5f688c4-5drqh\" (UID: 
\"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.908885 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.912193 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.912193 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.913377 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:55 crc kubenswrapper[4780]: I1205 08:22:55.926545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8rg\" (UniqueName: \"kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg\") pod \"horizon-6f5f688c4-5drqh\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008751 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008798 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008827 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008863 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 
08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008941 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008966 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-888ph\" (UniqueName: \"kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.008997 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.055493 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.110653 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.110769 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.110811 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-888ph\" (UniqueName: \"kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.110869 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.111000 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.111020 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle\") pod \"horizon-f88df4d7b-87h9z\" (UID: 
\"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.111054 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.111639 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.112526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.112812 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.114508 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.115189 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.116921 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.130661 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-888ph\" (UniqueName: \"kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph\") pod \"horizon-f88df4d7b-87h9z\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.184026 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.511057 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:22:56 crc kubenswrapper[4780]: W1205 08:22:56.511743 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod794926d8_eba5_44ce_a03f_ebc119f61dde.slice/crio-2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef WatchSource:0}: Error finding container 2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef: Status 404 returned error can't find the container with id 2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.663046 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.710037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerStarted","Data":"2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef"} Dec 05 08:22:56 crc kubenswrapper[4780]: I1205 08:22:56.711458 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerStarted","Data":"48ffb3da5787f550ced9b03cd27c9a8725e16f09976fd42f54b8bee872a66038"} Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.712317 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.743291 4780 generic.go:334] "Generic (PLEG): container finished" podID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerID="38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1" exitCode=0 Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.743362 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerDied","Data":"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1"} Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.743387 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ce2c4c7-d952-41e3-af8b-7446f9435571","Type":"ContainerDied","Data":"b148ed82d64ceeb58ce069e9fbdcf184251986429ae3cd3077ab102d1f3685bc"} Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.743403 4780 scope.go:117] "RemoveContainer" containerID="38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.747455 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.756156 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerID="7eb585f00e9cf3bca639d42415cfb25ab60b1feb49f5cc0f752f9a4756d0995c" exitCode=0 Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.756202 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerDied","Data":"7eb585f00e9cf3bca639d42415cfb25ab60b1feb49f5cc0f752f9a4756d0995c"} Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.811717 4780 scope.go:117] "RemoveContainer" containerID="43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.841808 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854123 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854189 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854222 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854289 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854450 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.854471 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8tbp\" (UniqueName: \"kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.855050 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.855465 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs" (OuterVolumeSpecName: "logs") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.856314 4780 scope.go:117] "RemoveContainer" containerID="38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1" Dec 05 08:22:57 crc kubenswrapper[4780]: E1205 08:22:57.858084 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1\": container with ID starting with 38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1 not found: ID does not exist" containerID="38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.858125 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1"} err="failed to get container status \"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1\": rpc error: code = NotFound desc = could not find container \"38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1\": container with ID starting with 38f6aef6ae5522ed903567c4bd5b4f067c051ec56cb5b275767425fceb1cdcc1 not found: ID does not exist" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.858150 4780 scope.go:117] "RemoveContainer" containerID="43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.862590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts" (OuterVolumeSpecName: "scripts") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: E1205 08:22:57.862766 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1\": container with ID starting with 43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1 not found: ID does not exist" containerID="43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.863246 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1"} err="failed to get container status \"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1\": rpc error: code = NotFound desc = could not find container \"43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1\": container with ID starting with 43da07b6b4039a085f93fd2fb0877b2f307c3d10a21be3b2430c43c63f76ccd1 not found: ID does not exist" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.863664 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs\") pod \"8ce2c4c7-d952-41e3-af8b-7446f9435571\" (UID: \"8ce2c4c7-d952-41e3-af8b-7446f9435571\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.866717 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.866760 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.866776 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ce2c4c7-d952-41e3-af8b-7446f9435571-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.897589 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp" (OuterVolumeSpecName: "kube-api-access-f8tbp") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "kube-api-access-f8tbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.921419 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.926883 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968210 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9s54\" (UniqueName: \"kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968396 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968463 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968523 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968547 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.968688 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts\") pod \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\" (UID: \"e6d986d6-1f60-4d57-aad6-82764a80ce9c\") " Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs" (OuterVolumeSpecName: "logs") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969363 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969386 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969399 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8tbp\" (UniqueName: \"kubernetes.io/projected/8ce2c4c7-d952-41e3-af8b-7446f9435571-kube-api-access-f8tbp\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969412 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.969653 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.972121 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54" (OuterVolumeSpecName: "kube-api-access-b9s54") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "kube-api-access-b9s54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.973077 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts" (OuterVolumeSpecName: "scripts") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:57 crc kubenswrapper[4780]: I1205 08:22:57.976709 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data" (OuterVolumeSpecName: "config-data") pod "8ce2c4c7-d952-41e3-af8b-7446f9435571" (UID: "8ce2c4c7-d952-41e3-af8b-7446f9435571"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.026145 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.033250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data" (OuterVolumeSpecName: "config-data") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.047556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e6d986d6-1f60-4d57-aad6-82764a80ce9c" (UID: "e6d986d6-1f60-4d57-aad6-82764a80ce9c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071158 4780 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d986d6-1f60-4d57-aad6-82764a80ce9c-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071192 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071203 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071213 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071221 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce2c4c7-d952-41e3-af8b-7446f9435571-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071230 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d986d6-1f60-4d57-aad6-82764a80ce9c-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.071237 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9s54\" (UniqueName: \"kubernetes.io/projected/e6d986d6-1f60-4d57-aad6-82764a80ce9c-kube-api-access-b9s54\") on node \"crc\" DevicePath \"\"" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.163823 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.167728 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180084 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: E1205 08:22:58.180523 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180545 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: E1205 08:22:58.180564 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180573 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: E1205 08:22:58.180591 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180598 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: E1205 08:22:58.180621 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180628 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180882 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180929 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180949 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" containerName="glance-log" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.180961 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" containerName="glance-httpd" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.182263 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.189402 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.189661 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.205910 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.275990 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-config-data\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276088 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-logs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276149 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-scripts\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276264 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.276282 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqt6w\" (UniqueName: \"kubernetes.io/projected/f46ee8ba-7661-4f55-a82c-c02c35272b58-kube-api-access-cqt6w\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377534 4780 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-config-data\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377627 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377655 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-logs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377684 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377718 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-scripts\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377762 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.377780 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqt6w\" (UniqueName: \"kubernetes.io/projected/f46ee8ba-7661-4f55-a82c-c02c35272b58-kube-api-access-cqt6w\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.378682 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.379449 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46ee8ba-7661-4f55-a82c-c02c35272b58-logs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.385159 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.385196 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-scripts\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.385369 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.391466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f46ee8ba-7661-4f55-a82c-c02c35272b58-config-data\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.400726 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqt6w\" (UniqueName: \"kubernetes.io/projected/f46ee8ba-7661-4f55-a82c-c02c35272b58-kube-api-access-cqt6w\") pod \"glance-default-external-api-0\" (UID: \"f46ee8ba-7661-4f55-a82c-c02c35272b58\") " pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.524339 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.789916 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6d986d6-1f60-4d57-aad6-82764a80ce9c","Type":"ContainerDied","Data":"c6bd56efee2b63bd2b31ab7fb72c348a7a97395f52121a6ff174e2a62c57feb4"} Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.789996 4780 scope.go:117] "RemoveContainer" containerID="7eb585f00e9cf3bca639d42415cfb25ab60b1feb49f5cc0f752f9a4756d0995c" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.789993 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.822123 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.835482 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.878629 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.878791 4780 scope.go:117] "RemoveContainer" containerID="0f16f42d27f57a64bf0c7fd113418e5c0b643233359e1f0a997ea035d28b6e5f" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.880421 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.884824 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.886138 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 08:22:58 crc kubenswrapper[4780]: I1205 08:22:58.894735 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010309 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010387 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2btb\" (UniqueName: \"kubernetes.io/projected/98d6811c-d6a8-4641-9b0d-d0b977125526-kube-api-access-b2btb\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010552 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010590 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-logs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.010741 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.112832 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113112 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-logs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113250 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113287 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113474 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113552 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2btb\" (UniqueName: \"kubernetes.io/projected/98d6811c-d6a8-4641-9b0d-d0b977125526-kube-api-access-b2btb\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.113835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.114347 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-logs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.114614 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98d6811c-d6a8-4641-9b0d-d0b977125526-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.119621 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-scripts\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.120279 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-config-data\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.126241 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.136910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d6811c-d6a8-4641-9b0d-d0b977125526-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.137486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2btb\" (UniqueName: \"kubernetes.io/projected/98d6811c-d6a8-4641-9b0d-d0b977125526-kube-api-access-b2btb\") pod \"glance-default-internal-api-0\" (UID: \"98d6811c-d6a8-4641-9b0d-d0b977125526\") " pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.183139 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 08:22:59 crc kubenswrapper[4780]: W1205 08:22:59.184326 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf46ee8ba_7661_4f55_a82c_c02c35272b58.slice/crio-381cb01db2cdb0d1a0e55666e681dddd0a1771e2aeb5128a20047369553388a0 WatchSource:0}: Error finding container 381cb01db2cdb0d1a0e55666e681dddd0a1771e2aeb5128a20047369553388a0: Status 404 returned error can't find the container with id 381cb01db2cdb0d1a0e55666e681dddd0a1771e2aeb5128a20047369553388a0 Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.213161 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.795718 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 08:22:59 crc kubenswrapper[4780]: I1205 08:22:59.818141 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f46ee8ba-7661-4f55-a82c-c02c35272b58","Type":"ContainerStarted","Data":"381cb01db2cdb0d1a0e55666e681dddd0a1771e2aeb5128a20047369553388a0"} Dec 05 08:23:00 crc kubenswrapper[4780]: I1205 08:23:00.159319 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce2c4c7-d952-41e3-af8b-7446f9435571" path="/var/lib/kubelet/pods/8ce2c4c7-d952-41e3-af8b-7446f9435571/volumes" Dec 05 08:23:00 crc kubenswrapper[4780]: I1205 08:23:00.160545 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d986d6-1f60-4d57-aad6-82764a80ce9c" path="/var/lib/kubelet/pods/e6d986d6-1f60-4d57-aad6-82764a80ce9c/volumes" Dec 05 08:23:00 crc kubenswrapper[4780]: I1205 08:23:00.830789 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f46ee8ba-7661-4f55-a82c-c02c35272b58","Type":"ContainerStarted","Data":"6002d00ab7e1fa53b1da8d23ba5c7ad6153e64d297f5d247521e79aa8afc5cc2"} Dec 05 08:23:04 crc kubenswrapper[4780]: W1205 08:23:04.090165 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98d6811c_d6a8_4641_9b0d_d0b977125526.slice/crio-f0303df1646efb4d31e3a81749ea53102ad30312eee4d7d5896022bb544025ca WatchSource:0}: Error finding container f0303df1646efb4d31e3a81749ea53102ad30312eee4d7d5896022bb544025ca: Status 404 returned error can't find the container with id f0303df1646efb4d31e3a81749ea53102ad30312eee4d7d5896022bb544025ca Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.880392 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f46ee8ba-7661-4f55-a82c-c02c35272b58","Type":"ContainerStarted","Data":"eaf674ab963da2a74ea22c1e7336caa91a336bc222a7c017a65b0b55f4f7d6eb"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.885246 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98d6811c-d6a8-4641-9b0d-d0b977125526","Type":"ContainerStarted","Data":"daebdb0edd19765dd1bfd0f6803e25b3acd0b75691962c36a4208b7ee36f5551"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.885294 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98d6811c-d6a8-4641-9b0d-d0b977125526","Type":"ContainerStarted","Data":"f0303df1646efb4d31e3a81749ea53102ad30312eee4d7d5896022bb544025ca"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.887798 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerStarted","Data":"4c024693137c09ce3bf01f0fc53ab53620a418f8e19134aa51f36cbb778e758a"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.887833 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerStarted","Data":"d83ecc3b2bb4d5a94fde96d0ec0406351c8e3665540674ca09e2db0c5baf741f"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.891868 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerStarted","Data":"5a989623122cac4ac838a95bbc388ab706336b1f322cfafffc39f906d432b129"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.891928 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerStarted","Data":"a3d99244ea8d53b0b4b88a84424414d0ba27a65c63a2a9cbf0e5c7b5f03cf4a2"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.892037 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb6c6645-jctkk" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon-log" containerID="cri-o://a3d99244ea8d53b0b4b88a84424414d0ba27a65c63a2a9cbf0e5c7b5f03cf4a2" gracePeriod=30 Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.892125 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bdb6c6645-jctkk" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon" containerID="cri-o://5a989623122cac4ac838a95bbc388ab706336b1f322cfafffc39f906d432b129" gracePeriod=30 Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.896378 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerStarted","Data":"22c8bbeda84456e6ab6c08b7afde1c5c7f8ca12fa7d590d742cb62e2c211e8a9"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.896428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerStarted","Data":"a93590f7c9805144647f0658a53de54132199f13ecedf29664be21e77b6bf2be"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.896568 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f69648fc-b9499" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon-log" containerID="cri-o://a93590f7c9805144647f0658a53de54132199f13ecedf29664be21e77b6bf2be" gracePeriod=30 Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.896642 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f69648fc-b9499" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon" containerID="cri-o://22c8bbeda84456e6ab6c08b7afde1c5c7f8ca12fa7d590d742cb62e2c211e8a9" gracePeriod=30 Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.914293 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerStarted","Data":"d9cdc66c27c98e8dd4b2bef0fe45e80d3d8b34eabf923e201dfa07f535eccde2"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.914476 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerStarted","Data":"5303de4171d90ef2e51e37ab70c95e47fdbd5177f692ed0623b5c07fdaa8883d"} Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.917472 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.91745363 podStartE2EDuration="6.91745363s" podCreationTimestamp="2025-12-05 08:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:23:04.907106259 +0000 UTC m=+5818.976622601" watchObservedRunningTime="2025-12-05 08:23:04.91745363 +0000 UTC m=+5818.986969962" Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.936687 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69f69648fc-b9499" podStartSLOduration=2.337691521 podStartE2EDuration="11.936666323s" podCreationTimestamp="2025-12-05 08:22:53 +0000 UTC" firstStartedPulling="2025-12-05 08:22:54.647707504 +0000 UTC m=+5808.717223836" lastFinishedPulling="2025-12-05 08:23:04.246682306 +0000 UTC m=+5818.316198638" observedRunningTime="2025-12-05 08:23:04.932078158 +0000 UTC m=+5819.001594510" watchObservedRunningTime="2025-12-05 08:23:04.936666323 +0000 UTC m=+5819.006182655" Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.955113 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bdb6c6645-jctkk" podStartSLOduration=2.454640212 podStartE2EDuration="11.955092174s" podCreationTimestamp="2025-12-05 08:22:53 +0000 UTC" firstStartedPulling="2025-12-05 08:22:54.802117064 +0000 UTC m=+5808.871633396" lastFinishedPulling="2025-12-05 08:23:04.302569026 +0000 UTC m=+5818.372085358" observedRunningTime="2025-12-05 08:23:04.95312062 +0000 UTC m=+5819.022636972" watchObservedRunningTime="2025-12-05 08:23:04.955092174 +0000 UTC m=+5819.024608526" Dec 05 08:23:04 crc kubenswrapper[4780]: I1205 08:23:04.980827 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f5f688c4-5drqh" podStartSLOduration=2.198676208 podStartE2EDuration="9.980805273s" podCreationTimestamp="2025-12-05 08:22:55 +0000 UTC" firstStartedPulling="2025-12-05 08:22:56.519427613 +0000 UTC m=+5810.588943945" lastFinishedPulling="2025-12-05 08:23:04.301556678 +0000 UTC m=+5818.371073010" observedRunningTime="2025-12-05 08:23:04.974165353 +0000 UTC m=+5819.043681685" watchObservedRunningTime="2025-12-05 08:23:04.980805273 +0000 UTC m=+5819.050321605" Dec 05 08:23:05 crc kubenswrapper[4780]: I1205 08:23:05.926321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"98d6811c-d6a8-4641-9b0d-d0b977125526","Type":"ContainerStarted","Data":"4113134be0583aaace2565c925f7def599e619dcef0288de85c7adc58d0bb8fb"} Dec 05 08:23:05 crc kubenswrapper[4780]: I1205 08:23:05.959004 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.958980518 podStartE2EDuration="7.958980518s" podCreationTimestamp="2025-12-05 08:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:23:05.947954008 +0000 UTC m=+5820.017470350" watchObservedRunningTime="2025-12-05 08:23:05.958980518 +0000 UTC m=+5820.028496850" Dec 05 08:23:05 crc kubenswrapper[4780]: I1205 08:23:05.960541 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f88df4d7b-87h9z" podStartSLOduration=3.384958432 podStartE2EDuration="10.96053133s" podCreationTimestamp="2025-12-05 08:22:55 +0000 UTC" firstStartedPulling="2025-12-05 08:22:56.670899732 +0000 UTC m=+5810.740416064" lastFinishedPulling="2025-12-05 08:23:04.24647263 +0000 UTC m=+5818.315988962" observedRunningTime="2025-12-05 08:23:04.997398044 +0000 UTC m=+5819.066914406" watchObservedRunningTime="2025-12-05 
08:23:05.96053133 +0000 UTC m=+5820.030047672" Dec 05 08:23:06 crc kubenswrapper[4780]: I1205 08:23:06.056745 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:23:06 crc kubenswrapper[4780]: I1205 08:23:06.057020 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:23:06 crc kubenswrapper[4780]: I1205 08:23:06.184230 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:23:06 crc kubenswrapper[4780]: I1205 08:23:06.184280 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.527185 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.527959 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.561409 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.580264 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.954831 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 08:23:08 crc kubenswrapper[4780]: I1205 08:23:08.954877 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.214234 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.214313 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.249510 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.262024 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.961662 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:09 crc kubenswrapper[4780]: I1205 08:23:09.962536 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:10 crc kubenswrapper[4780]: I1205 08:23:10.969937 4780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 08:23:11 crc kubenswrapper[4780]: I1205 08:23:11.073718 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 08:23:11 crc kubenswrapper[4780]: I1205 08:23:11.074635 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 08:23:11 crc kubenswrapper[4780]: I1205 08:23:11.977039 4780 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Dec 05 08:23:12 crc kubenswrapper[4780]: I1205 08:23:12.099251 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:12 crc kubenswrapper[4780]: I1205 08:23:12.957959 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 08:23:14 crc kubenswrapper[4780]: I1205 08:23:14.149558 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:23:14 crc kubenswrapper[4780]: I1205 08:23:14.317191 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:23:16 crc kubenswrapper[4780]: I1205 08:23:16.059503 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.97:8443: connect: connection refused" Dec 05 08:23:16 crc kubenswrapper[4780]: I1205 08:23:16.186299 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.98:8443: connect: connection refused" Dec 05 08:23:28 crc kubenswrapper[4780]: I1205 08:23:28.059615 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:23:28 crc kubenswrapper[4780]: I1205 08:23:28.067355 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:23:29 crc kubenswrapper[4780]: I1205 08:23:29.431912 4780 scope.go:117] "RemoveContainer" containerID="40a48118eaeb20ea32c39b619db2fb7c593b0621d9bdd9cee19d65f38268fa3a" Dec 05 08:23:29 crc kubenswrapper[4780]: I1205 08:23:29.493391 4780 scope.go:117] "RemoveContainer" containerID="6e26139e0b7aa87aacc04368f0516752b07cd3432fc1fa2e6107726f08f2f7c0" Dec 05 08:23:29 crc kubenswrapper[4780]: I1205 08:23:29.878564 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:23:29 crc kubenswrapper[4780]: I1205 08:23:29.892082 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:23:29 crc kubenswrapper[4780]: I1205 08:23:29.977208 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:23:30 crc kubenswrapper[4780]: I1205 08:23:30.157324 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon-log" containerID="cri-o://d83ecc3b2bb4d5a94fde96d0ec0406351c8e3665540674ca09e2db0c5baf741f" gracePeriod=30 Dec 05 08:23:30 crc kubenswrapper[4780]: I1205 08:23:30.157757 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" containerID="cri-o://4c024693137c09ce3bf01f0fc53ab53620a418f8e19134aa51f36cbb778e758a" gracePeriod=30 Dec 05 08:23:34 crc kubenswrapper[4780]: I1205 08:23:34.219146 4780 generic.go:334] "Generic (PLEG): 
container finished" podID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerID="4c024693137c09ce3bf01f0fc53ab53620a418f8e19134aa51f36cbb778e758a" exitCode=0 Dec 05 08:23:34 crc kubenswrapper[4780]: I1205 08:23:34.219288 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerDied","Data":"4c024693137c09ce3bf01f0fc53ab53620a418f8e19134aa51f36cbb778e758a"} Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.231029 4780 generic.go:334] "Generic (PLEG): container finished" podID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerID="5a989623122cac4ac838a95bbc388ab706336b1f322cfafffc39f906d432b129" exitCode=137 Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.231368 4780 generic.go:334] "Generic (PLEG): container finished" podID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerID="a3d99244ea8d53b0b4b88a84424414d0ba27a65c63a2a9cbf0e5c7b5f03cf4a2" exitCode=137 Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.231416 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerDied","Data":"5a989623122cac4ac838a95bbc388ab706336b1f322cfafffc39f906d432b129"} Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.231446 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerDied","Data":"a3d99244ea8d53b0b4b88a84424414d0ba27a65c63a2a9cbf0e5c7b5f03cf4a2"} Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.235153 4780 generic.go:334] "Generic (PLEG): container finished" podID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerID="22c8bbeda84456e6ab6c08b7afde1c5c7f8ca12fa7d590d742cb62e2c211e8a9" exitCode=137 Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.235186 4780 generic.go:334] "Generic (PLEG): container finished" podID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerID="a93590f7c9805144647f0658a53de54132199f13ecedf29664be21e77b6bf2be" exitCode=137 Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.235210 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerDied","Data":"22c8bbeda84456e6ab6c08b7afde1c5c7f8ca12fa7d590d742cb62e2c211e8a9"} Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.235244 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerDied","Data":"a93590f7c9805144647f0658a53de54132199f13ecedf29664be21e77b6bf2be"} Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.384679 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.398020 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.416433 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs\") pod \"d1f778cc-f0ab-4209-a99f-24455af7731d\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.416929 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data\") pod \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.416871 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs" (OuterVolumeSpecName: "logs") pod "d1f778cc-f0ab-4209-a99f-24455af7731d" (UID: "d1f778cc-f0ab-4209-a99f-24455af7731d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.417687 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts\") pod \"d1f778cc-f0ab-4209-a99f-24455af7731d\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418014 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key\") pod \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418057 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data\") pod \"d1f778cc-f0ab-4209-a99f-24455af7731d\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418173 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vscld\" (UniqueName: \"kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld\") pod \"d1f778cc-f0ab-4209-a99f-24455af7731d\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418286 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs\") pod \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418308 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99brm\" (UniqueName: \"kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm\") pod \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418334 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts\") pod 
\"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\" (UID: \"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418363 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key\") pod \"d1f778cc-f0ab-4209-a99f-24455af7731d\" (UID: \"d1f778cc-f0ab-4209-a99f-24455af7731d\") " Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.418772 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs" (OuterVolumeSpecName: "logs") pod "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" (UID: "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.419377 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.419397 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f778cc-f0ab-4209-a99f-24455af7731d-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.424028 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d1f778cc-f0ab-4209-a99f-24455af7731d" (UID: "d1f778cc-f0ab-4209-a99f-24455af7731d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.428165 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" (UID: "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.431494 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld" (OuterVolumeSpecName: "kube-api-access-vscld") pod "d1f778cc-f0ab-4209-a99f-24455af7731d" (UID: "d1f778cc-f0ab-4209-a99f-24455af7731d"). InnerVolumeSpecName "kube-api-access-vscld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.443171 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm" (OuterVolumeSpecName: "kube-api-access-99brm") pod "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" (UID: "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50"). InnerVolumeSpecName "kube-api-access-99brm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.449264 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data" (OuterVolumeSpecName: "config-data") pod "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" (UID: "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.455239 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts" (OuterVolumeSpecName: "scripts") pod "d1f778cc-f0ab-4209-a99f-24455af7731d" (UID: "d1f778cc-f0ab-4209-a99f-24455af7731d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.456822 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data" (OuterVolumeSpecName: "config-data") pod "d1f778cc-f0ab-4209-a99f-24455af7731d" (UID: "d1f778cc-f0ab-4209-a99f-24455af7731d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.476263 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts" (OuterVolumeSpecName: "scripts") pod "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" (UID: "55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520759 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vscld\" (UniqueName: \"kubernetes.io/projected/d1f778cc-f0ab-4209-a99f-24455af7731d-kube-api-access-vscld\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520797 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99brm\" (UniqueName: \"kubernetes.io/projected/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-kube-api-access-99brm\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520808 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520818 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1f778cc-f0ab-4209-a99f-24455af7731d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520827 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520835 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520844 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:35 crc kubenswrapper[4780]: I1205 08:23:35.520852 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1f778cc-f0ab-4209-a99f-24455af7731d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 
Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.057089 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.97:8443: connect: connection refused" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.247208 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bdb6c6645-jctkk" event={"ID":"55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50","Type":"ContainerDied","Data":"a7789774690df6a3bad7eb8452635084c19b7bd6f691783f91bc0060d4ae0694"} Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.247283 4780 scope.go:117] "RemoveContainer" containerID="5a989623122cac4ac838a95bbc388ab706336b1f322cfafffc39f906d432b129" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.247298 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bdb6c6645-jctkk" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.249220 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f69648fc-b9499" event={"ID":"d1f778cc-f0ab-4209-a99f-24455af7731d","Type":"ContainerDied","Data":"57b6c62dc32d498e4508a49df01d06e36ba4dbaa2f6b747c0d4f982838af2257"} Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.249329 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f69648fc-b9499" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.299920 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.308672 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69f69648fc-b9499"] Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.318183 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.326729 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bdb6c6645-jctkk"] Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.428802 4780 scope.go:117] "RemoveContainer" containerID="a3d99244ea8d53b0b4b88a84424414d0ba27a65c63a2a9cbf0e5c7b5f03cf4a2" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.444650 4780 scope.go:117] "RemoveContainer" containerID="22c8bbeda84456e6ab6c08b7afde1c5c7f8ca12fa7d590d742cb62e2c211e8a9" Dec 05 08:23:36 crc kubenswrapper[4780]: I1205 08:23:36.602642 4780 scope.go:117] "RemoveContainer" containerID="a93590f7c9805144647f0658a53de54132199f13ecedf29664be21e77b6bf2be" Dec 05 08:23:38 crc kubenswrapper[4780]: I1205 08:23:38.148790 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" path="/var/lib/kubelet/pods/55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50/volumes" Dec 05 08:23:38 crc kubenswrapper[4780]: I1205 08:23:38.149690 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" path="/var/lib/kubelet/pods/d1f778cc-f0ab-4209-a99f-24455af7731d/volumes" Dec 05 08:23:46 crc kubenswrapper[4780]: I1205 08:23:46.057375 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.97:8443: connect: 
connection refused" Dec 05 08:23:56 crc kubenswrapper[4780]: I1205 08:23:56.057207 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f5f688c4-5drqh" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.97:8443: connect: connection refused" Dec 05 08:23:56 crc kubenswrapper[4780]: I1205 08:23:56.058595 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.502792 4780 generic.go:334] "Generic (PLEG): container finished" podID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerID="d83ecc3b2bb4d5a94fde96d0ec0406351c8e3665540674ca09e2db0c5baf741f" exitCode=137 Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.502911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerDied","Data":"d83ecc3b2bb4d5a94fde96d0ec0406351c8e3665540674ca09e2db0c5baf741f"} Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.503352 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f5f688c4-5drqh" event={"ID":"794926d8-eba5-44ce-a03f-ebc119f61dde","Type":"ContainerDied","Data":"2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef"} Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.503368 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd7bab611d45cb90046dacbd7166a070d96ac4fe306a270ed6c556a788789ef" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.553752 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708450 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708494 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708552 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708576 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708599 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708753 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc8rg\" (UniqueName: \"kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.708920 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle\") pod \"794926d8-eba5-44ce-a03f-ebc119f61dde\" (UID: \"794926d8-eba5-44ce-a03f-ebc119f61dde\") " Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.709190 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs" (OuterVolumeSpecName: "logs") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.709623 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794926d8-eba5-44ce-a03f-ebc119f61dde-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.714651 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg" (OuterVolumeSpecName: "kube-api-access-zc8rg") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). 
InnerVolumeSpecName "kube-api-access-zc8rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.715224 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.733588 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts" (OuterVolumeSpecName: "scripts") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.734605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data" (OuterVolumeSpecName: "config-data") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.737129 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.757445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "794926d8-eba5-44ce-a03f-ebc119f61dde" (UID: "794926d8-eba5-44ce-a03f-ebc119f61dde"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812327 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812360 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812373 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/794926d8-eba5-44ce-a03f-ebc119f61dde-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812382 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812390 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/794926d8-eba5-44ce-a03f-ebc119f61dde-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:00 crc kubenswrapper[4780]: I1205 08:24:00.812399 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc8rg\" (UniqueName: \"kubernetes.io/projected/794926d8-eba5-44ce-a03f-ebc119f61dde-kube-api-access-zc8rg\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:01 crc kubenswrapper[4780]: I1205 08:24:01.512067 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f5f688c4-5drqh" Dec 05 08:24:01 crc kubenswrapper[4780]: I1205 08:24:01.549286 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:24:01 crc kubenswrapper[4780]: I1205 08:24:01.559775 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f5f688c4-5drqh"] Dec 05 08:24:02 crc kubenswrapper[4780]: I1205 08:24:02.148495 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" path="/var/lib/kubelet/pods/794926d8-eba5-44ce-a03f-ebc119f61dde/volumes" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.035486 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6666c5554b-zfm84"] Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036442 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036455 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036469 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036476 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036490 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036497 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036516 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036521 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036543 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036550 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: E1205 08:24:11.036573 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036579 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036759 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036772 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f778cc-f0ab-4209-a99f-24455af7731d" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 
08:24:11.036783 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036793 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036813 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="794926d8-eba5-44ce-a03f-ebc119f61dde" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.036825 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a325ce-e0ac-4bc5-8a9f-1c5d936c5c50" containerName="horizon-log" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.037851 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.047471 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6666c5554b-zfm84"] Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228057 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-scripts\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228141 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-combined-ca-bundle\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228175 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/893ee1ed-bef7-42d5-9582-af53d85b3d6d-logs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228305 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-tls-certs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228385 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-config-data\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228439 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-secret-key\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.228582 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chg4\" (UniqueName: \"kubernetes.io/projected/893ee1ed-bef7-42d5-9582-af53d85b3d6d-kube-api-access-4chg4\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-combined-ca-bundle\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330819 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/893ee1ed-bef7-42d5-9582-af53d85b3d6d-logs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330841 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-tls-certs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330868 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-config-data\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330938 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-secret-key\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.330973 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chg4\" (UniqueName: \"kubernetes.io/projected/893ee1ed-bef7-42d5-9582-af53d85b3d6d-kube-api-access-4chg4\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.331095 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-scripts\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.332052 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-scripts\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.332592 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/893ee1ed-bef7-42d5-9582-af53d85b3d6d-config-data\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.333002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/893ee1ed-bef7-42d5-9582-af53d85b3d6d-logs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.338477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-tls-certs\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.338533 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-combined-ca-bundle\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.338531 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/893ee1ed-bef7-42d5-9582-af53d85b3d6d-horizon-secret-key\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.353840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chg4\" (UniqueName: \"kubernetes.io/projected/893ee1ed-bef7-42d5-9582-af53d85b3d6d-kube-api-access-4chg4\") pod \"horizon-6666c5554b-zfm84\" (UID: \"893ee1ed-bef7-42d5-9582-af53d85b3d6d\") " pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.402176 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:11 crc kubenswrapper[4780]: I1205 08:24:11.897203 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6666c5554b-zfm84"] Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.460489 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-9hbt2"] Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.462061 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.475966 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9hbt2"] Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.558426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2cc\" (UniqueName: \"kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.559440 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.560014 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-6c66-account-create-update-kbcb2"] Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.561513 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.566263 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.572144 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6c66-account-create-update-kbcb2"] Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.617465 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6666c5554b-zfm84" event={"ID":"893ee1ed-bef7-42d5-9582-af53d85b3d6d","Type":"ContainerStarted","Data":"4b3670a089317d694b470834d283bf38e9072a511833691b3479409019b34d99"} Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.617509 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6666c5554b-zfm84" event={"ID":"893ee1ed-bef7-42d5-9582-af53d85b3d6d","Type":"ContainerStarted","Data":"a83070040e1b9139e716971d91fa44aba36d2ef2d610c7fae14acab67a1081d7"} Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.617522 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6666c5554b-zfm84" event={"ID":"893ee1ed-bef7-42d5-9582-af53d85b3d6d","Type":"ContainerStarted","Data":"7700390bf6c1458013a52cd60c663ff38469e81b2d517f4c2f21eb9902d0e545"} Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.661419 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2cc\" (UniqueName: \"kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.661683 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.661760 4780 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.661827 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sb5s\" (UniqueName: \"kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.662707 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.679205 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2cc\" (UniqueName: \"kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc\") pod \"heat-db-create-9hbt2\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.763018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sb5s\" (UniqueName: \"kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.763256 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.764426 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.783461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sb5s\" (UniqueName: \"kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s\") pod \"heat-6c66-account-create-update-kbcb2\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.783900 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:12 crc kubenswrapper[4780]: I1205 08:24:12.883206 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.025850 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6666c5554b-zfm84" podStartSLOduration=2.025829685 podStartE2EDuration="2.025829685s" podCreationTimestamp="2025-12-05 08:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:24:12.64326613 +0000 UTC m=+5886.712782482" watchObservedRunningTime="2025-12-05 08:24:13.025829685 +0000 UTC m=+5887.095346007" Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.032691 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9hbt2"] Dec 05 08:24:13 crc kubenswrapper[4780]: W1205 08:24:13.034919 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64cea429_d3a5_4fb5_a276_42c68329032c.slice/crio-028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da WatchSource:0}: Error finding container 028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da: Status 404 returned error can't find the container with id 028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da Dec 05 08:24:13 crc kubenswrapper[4780]: W1205 08:24:13.355749 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8deb9906_3ae2_4c58_a395_d87d0c8e1ca1.slice/crio-7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6 WatchSource:0}: Error finding container 7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6: Status 404 returned error can't find the container with id 7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6 Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.363091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-6c66-account-create-update-kbcb2"] Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.626706 4780 generic.go:334] "Generic (PLEG): container finished" podID="64cea429-d3a5-4fb5-a276-42c68329032c" containerID="9e0bb0dd6f7c56c4c35c52fa1f348b0fb3ae58ce738c5e9bbd382b34fe0cdcf2" exitCode=0 Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.626909 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9hbt2" event={"ID":"64cea429-d3a5-4fb5-a276-42c68329032c","Type":"ContainerDied","Data":"9e0bb0dd6f7c56c4c35c52fa1f348b0fb3ae58ce738c5e9bbd382b34fe0cdcf2"} Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.627037 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9hbt2" event={"ID":"64cea429-d3a5-4fb5-a276-42c68329032c","Type":"ContainerStarted","Data":"028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da"} Dec 05 08:24:13 crc kubenswrapper[4780]: I1205 08:24:13.628705 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6c66-account-create-update-kbcb2" event={"ID":"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1","Type":"ContainerStarted","Data":"7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6"} Dec 05 08:24:14 crc kubenswrapper[4780]: I1205 08:24:14.640029 4780 generic.go:334] "Generic (PLEG): container finished" podID="8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" containerID="1a1215b904b69e2affd709432310e76e56f85d801c09755de2ed1649c04623ec" exitCode=0 Dec 05 08:24:14 crc kubenswrapper[4780]: I1205 
Dec 05 08:24:14 crc kubenswrapper[4780]: I1205 08:24:14.640092 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6c66-account-create-update-kbcb2" event={"ID":"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1","Type":"ContainerDied","Data":"1a1215b904b69e2affd709432310e76e56f85d801c09755de2ed1649c04623ec"} Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.013951 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.124309 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts\") pod \"64cea429-d3a5-4fb5-a276-42c68329032c\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.124416 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2cc\" (UniqueName: \"kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc\") pod \"64cea429-d3a5-4fb5-a276-42c68329032c\" (UID: \"64cea429-d3a5-4fb5-a276-42c68329032c\") " Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.124844 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64cea429-d3a5-4fb5-a276-42c68329032c" (UID: "64cea429-d3a5-4fb5-a276-42c68329032c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.125152 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64cea429-d3a5-4fb5-a276-42c68329032c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.135120 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc" (OuterVolumeSpecName: "kube-api-access-bk2cc") pod "64cea429-d3a5-4fb5-a276-42c68329032c" (UID: "64cea429-d3a5-4fb5-a276-42c68329032c"). InnerVolumeSpecName "kube-api-access-bk2cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.227772 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2cc\" (UniqueName: \"kubernetes.io/projected/64cea429-d3a5-4fb5-a276-42c68329032c-kube-api-access-bk2cc\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.653375 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9hbt2" Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.653588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9hbt2" event={"ID":"64cea429-d3a5-4fb5-a276-42c68329032c","Type":"ContainerDied","Data":"028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da"} Dec 05 08:24:15 crc kubenswrapper[4780]: I1205 08:24:15.654002 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028a19118181878ec3260057c7bd4b815f247a44a20b9a93f6a2f7750fd9d8da" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.113167 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.251161 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sb5s\" (UniqueName: \"kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s\") pod \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.251226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts\") pod \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\" (UID: \"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1\") " Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.252792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" (UID: "8deb9906-3ae2-4c58-a395-d87d0c8e1ca1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.256415 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s" (OuterVolumeSpecName: "kube-api-access-5sb5s") pod "8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" (UID: "8deb9906-3ae2-4c58-a395-d87d0c8e1ca1"). InnerVolumeSpecName "kube-api-access-5sb5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.354189 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sb5s\" (UniqueName: \"kubernetes.io/projected/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-kube-api-access-5sb5s\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.354542 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.664976 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-6c66-account-create-update-kbcb2" event={"ID":"8deb9906-3ae2-4c58-a395-d87d0c8e1ca1","Type":"ContainerDied","Data":"7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6"} Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.665011 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2a4292db2f716c254e8ad2e97cdfd1e311b8ba3eb76715fdeab043d9d412a6" Dec 05 08:24:16 crc kubenswrapper[4780]: I1205 08:24:16.665697 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-6c66-account-create-update-kbcb2" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.658808 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-mjjls"] Dec 05 08:24:17 crc kubenswrapper[4780]: E1205 08:24:17.659415 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cea429-d3a5-4fb5-a276-42c68329032c" containerName="mariadb-database-create" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.659522 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cea429-d3a5-4fb5-a276-42c68329032c" containerName="mariadb-database-create" Dec 05 08:24:17 crc kubenswrapper[4780]: E1205 08:24:17.659616 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" containerName="mariadb-account-create-update" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.659683 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" containerName="mariadb-account-create-update" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.660191 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" containerName="mariadb-account-create-update" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.660534 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="64cea429-d3a5-4fb5-a276-42c68329032c" containerName="mariadb-database-create" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.663294 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.671971 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nwptx" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.672034 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.681817 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mjjls"] Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.786612 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.786712 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxwn\" (UniqueName: \"kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.786781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.894727 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.895327 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.895465 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxwn\" (UniqueName: \"kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.906501 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.916184 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.918547 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxwn\" (UniqueName: \"kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn\") pod \"heat-db-sync-mjjls\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:17 crc kubenswrapper[4780]: I1205 08:24:17.983998 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:18 crc kubenswrapper[4780]: I1205 08:24:18.266959 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-mjjls"] Dec 05 08:24:18 crc kubenswrapper[4780]: W1205 08:24:18.268348 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a024fc_06ca_42fa_8995_1de8cb2ce874.slice/crio-94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660 WatchSource:0}: Error finding container 94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660: Status 404 returned error can't find the container with id 94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660 Dec 05 08:24:18 crc kubenswrapper[4780]: I1205 08:24:18.700921 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mjjls" event={"ID":"c5a024fc-06ca-42fa-8995-1de8cb2ce874","Type":"ContainerStarted","Data":"94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660"} Dec 05 08:24:21 crc kubenswrapper[4780]: I1205 08:24:21.403252 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:21 crc kubenswrapper[4780]: I1205 08:24:21.404574 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.073950 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5h2k2"] Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.090871 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-67c5-account-create-update-phb5n"] Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.103717 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5h2k2"] Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.119699 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-67c5-account-create-update-phb5n"] Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.168437 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1d27de-6a3a-4064-84c7-f7efb078ef9d" path="/var/lib/kubelet/pods/6d1d27de-6a3a-4064-84c7-f7efb078ef9d/volumes" Dec 05 08:24:24 crc kubenswrapper[4780]: I1205 08:24:24.169056 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f787a1a-4869-4803-84a5-6cf8dfac5f48" path="/var/lib/kubelet/pods/8f787a1a-4869-4803-84a5-6cf8dfac5f48/volumes" Dec 05 08:24:29 crc kubenswrapper[4780]: I1205 08:24:29.639477 4780 scope.go:117] "RemoveContainer" containerID="5709cd09c1bd26a1d95445f7b0ff275a6baecb8863d32ea8a246d08c1f0e5b37" Dec 05 08:24:29 crc kubenswrapper[4780]: I1205 08:24:29.662519 4780 scope.go:117] "RemoveContainer" containerID="220751bda835517fa3f8e0c65873054128f856244ab50e39c1ebcfe699e1cf4a" Dec 05 08:24:29 crc kubenswrapper[4780]: I1205 08:24:29.814936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mjjls" event={"ID":"c5a024fc-06ca-42fa-8995-1de8cb2ce874","Type":"ContainerStarted","Data":"5c0f6bfe1777ec984a944e3d91b0b492d7ff461753608fcdb5109c08980b6bc3"} Dec 05 08:24:29 crc kubenswrapper[4780]: I1205 08:24:29.836847 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-mjjls" podStartSLOduration=2.292707938 podStartE2EDuration="12.836825267s" podCreationTimestamp="2025-12-05 08:24:17 +0000 UTC" 
firstStartedPulling="2025-12-05 08:24:18.27033823 +0000 UTC m=+5892.339854562" lastFinishedPulling="2025-12-05 08:24:28.814455559 +0000 UTC m=+5902.883971891" observedRunningTime="2025-12-05 08:24:29.828518391 +0000 UTC m=+5903.898034733" watchObservedRunningTime="2025-12-05 08:24:29.836825267 +0000 UTC m=+5903.906341599" Dec 05 08:24:31 crc kubenswrapper[4780]: I1205 08:24:31.404518 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6666c5554b-zfm84" podUID="893ee1ed-bef7-42d5-9582-af53d85b3d6d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.101:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.101:8443: connect: connection refused" Dec 05 08:24:31 crc kubenswrapper[4780]: I1205 08:24:31.832202 4780 generic.go:334] "Generic (PLEG): container finished" podID="c5a024fc-06ca-42fa-8995-1de8cb2ce874" containerID="5c0f6bfe1777ec984a944e3d91b0b492d7ff461753608fcdb5109c08980b6bc3" exitCode=0 Dec 05 08:24:31 crc kubenswrapper[4780]: I1205 08:24:31.832600 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mjjls" event={"ID":"c5a024fc-06ca-42fa-8995-1de8cb2ce874","Type":"ContainerDied","Data":"5c0f6bfe1777ec984a944e3d91b0b492d7ff461753608fcdb5109c08980b6bc3"} Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.085198 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.154451 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data\") pod \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.154551 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcxwn\" (UniqueName: \"kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn\") pod \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.160187 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn" (OuterVolumeSpecName: "kube-api-access-bcxwn") pod "c5a024fc-06ca-42fa-8995-1de8cb2ce874" (UID: "c5a024fc-06ca-42fa-8995-1de8cb2ce874"). InnerVolumeSpecName "kube-api-access-bcxwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.223791 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data" (OuterVolumeSpecName: "config-data") pod "c5a024fc-06ca-42fa-8995-1de8cb2ce874" (UID: "c5a024fc-06ca-42fa-8995-1de8cb2ce874"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.255936 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle\") pod \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\" (UID: \"c5a024fc-06ca-42fa-8995-1de8cb2ce874\") " Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.256539 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.256558 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcxwn\" (UniqueName: \"kubernetes.io/projected/c5a024fc-06ca-42fa-8995-1de8cb2ce874-kube-api-access-bcxwn\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.278066 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a024fc-06ca-42fa-8995-1de8cb2ce874" (UID: "c5a024fc-06ca-42fa-8995-1de8cb2ce874"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.359477 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a024fc-06ca-42fa-8995-1de8cb2ce874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.848466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-mjjls" event={"ID":"c5a024fc-06ca-42fa-8995-1de8cb2ce874","Type":"ContainerDied","Data":"94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660"} Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.848504 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c570375e8c70330166771ea66255c3c6206264c246e6661ce819fad4fa1660" Dec 05 08:24:33 crc kubenswrapper[4780]: I1205 08:24:33.848512 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-mjjls" Dec 05 08:24:34 crc kubenswrapper[4780]: I1205 08:24:34.027156 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zxqc4"] Dec 05 08:24:34 crc kubenswrapper[4780]: I1205 08:24:34.036571 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zxqc4"] Dec 05 08:24:34 crc kubenswrapper[4780]: I1205 08:24:34.149788 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0600caa7-8925-4bd9-adde-5ffbc2b3e732" path="/var/lib/kubelet/pods/0600caa7-8925-4bd9-adde-5ffbc2b3e732/volumes" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.122454 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:24:35 crc kubenswrapper[4780]: E1205 08:24:35.122944 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a024fc-06ca-42fa-8995-1de8cb2ce874" containerName="heat-db-sync" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.122966 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a024fc-06ca-42fa-8995-1de8cb2ce874" containerName="heat-db-sync" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.123218 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a024fc-06ca-42fa-8995-1de8cb2ce874" containerName="heat-db-sync" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.124973 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.126590 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.127448 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-nwptx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.127900 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.155145 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.272497 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.274186 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.281255 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.293964 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.301736 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.301817 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.301902 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhxj\" (UniqueName: \"kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.302211 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.314603 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.316325 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.318987 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.329565 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404640 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkk6n\" (UniqueName: \"kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404762 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404870 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqbk\" (UniqueName: \"kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404910 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhxj\" (UniqueName: \"kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.404929 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.405002 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: 
\"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.405054 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.405096 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.405132 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.405163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.411186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.411818 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.412667 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.430582 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhxj\" (UniqueName: \"kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj\") pod \"heat-engine-77548cd754-2b6cx\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.454028 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507037 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqbk\" (UniqueName: \"kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507084 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507154 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507209 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507288 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507355 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkk6n\" (UniqueName: \"kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.507399 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.520135 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " 
pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.520541 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.524827 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.526587 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.527867 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.530547 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkk6n\" (UniqueName: \"kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.534743 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data\") pod \"heat-api-6f46cd574c-ssrn4\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.547664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwqbk\" (UniqueName: \"kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk\") pod \"heat-cfnapi-85784df95c-2kxps\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.599479 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:35 crc kubenswrapper[4780]: I1205 08:24:35.644985 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.061447 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.227734 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.321188 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.913974 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77548cd754-2b6cx" event={"ID":"b20abb6a-39dc-49a3-8289-01e85cd7de38","Type":"ContainerStarted","Data":"78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078"} Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.914045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77548cd754-2b6cx" event={"ID":"b20abb6a-39dc-49a3-8289-01e85cd7de38","Type":"ContainerStarted","Data":"16fba49ea1be7a58f067545b7baa4c6ce2915ce96fb2e4e633b0414c5aa8683f"} Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.914262 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.916511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f46cd574c-ssrn4" event={"ID":"db88cf84-bc92-4553-aec7-137af48fb72c","Type":"ContainerStarted","Data":"7e2425fc0e43f4c3d19064a6d7f3a2cd4f31121c22fc673364cc6ee0d78837de"} Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.918956 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85784df95c-2kxps" event={"ID":"f56294b1-f726-4951-a56d-4f02584c11cf","Type":"ContainerStarted","Data":"f4246103e4bbde6a645c6ecd96a6a3872b423ce18f7a3bac319f0fad44e6be04"} Dec 05 08:24:36 crc kubenswrapper[4780]: I1205 08:24:36.937291 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-77548cd754-2b6cx" podStartSLOduration=1.937272851 podStartE2EDuration="1.937272851s" podCreationTimestamp="2025-12-05 08:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:24:36.933003075 +0000 UTC m=+5911.002519417" watchObservedRunningTime="2025-12-05 08:24:36.937272851 +0000 UTC m=+5911.006789183" Dec 05 08:24:38 crc kubenswrapper[4780]: I1205 08:24:38.947992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85784df95c-2kxps" event={"ID":"f56294b1-f726-4951-a56d-4f02584c11cf","Type":"ContainerStarted","Data":"f9c607fe821d75b22519653961ac5dcedc8789dc58b29eed860ba3221d8651fe"} Dec 05 08:24:38 crc kubenswrapper[4780]: I1205 08:24:38.949711 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:38 crc kubenswrapper[4780]: I1205 08:24:38.951266 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f46cd574c-ssrn4" event={"ID":"db88cf84-bc92-4553-aec7-137af48fb72c","Type":"ContainerStarted","Data":"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3"} Dec 05 08:24:38 crc kubenswrapper[4780]: I1205 08:24:38.951938 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:39 
crc kubenswrapper[4780]: I1205 08:24:39.011445 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-85784df95c-2kxps" podStartSLOduration=1.9069808670000001 podStartE2EDuration="4.011419465s" podCreationTimestamp="2025-12-05 08:24:35 +0000 UTC" firstStartedPulling="2025-12-05 08:24:36.236216443 +0000 UTC m=+5910.305732785" lastFinishedPulling="2025-12-05 08:24:38.340655051 +0000 UTC m=+5912.410171383" observedRunningTime="2025-12-05 08:24:38.971263583 +0000 UTC m=+5913.040779925" watchObservedRunningTime="2025-12-05 08:24:39.011419465 +0000 UTC m=+5913.080935797" Dec 05 08:24:39 crc kubenswrapper[4780]: I1205 08:24:39.012957 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f46cd574c-ssrn4" podStartSLOduration=1.995584177 podStartE2EDuration="4.012947587s" podCreationTimestamp="2025-12-05 08:24:35 +0000 UTC" firstStartedPulling="2025-12-05 08:24:36.325792759 +0000 UTC m=+5910.395309091" lastFinishedPulling="2025-12-05 08:24:38.343156169 +0000 UTC m=+5912.412672501" observedRunningTime="2025-12-05 08:24:38.996911341 +0000 UTC m=+5913.066427683" watchObservedRunningTime="2025-12-05 08:24:39.012947587 +0000 UTC m=+5913.082463919" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.437207 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.440242 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.468799 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.528369 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.528412 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6tv\" (UniqueName: \"kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.528448 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.630063 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.630409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cg6tv\" (UniqueName: \"kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.630444 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.630705 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.630840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.655740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6tv\" (UniqueName: \"kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv\") pod \"certified-operators-w58zl\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:40 crc kubenswrapper[4780]: I1205 08:24:40.778244 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:41 crc kubenswrapper[4780]: I1205 08:24:41.495728 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:42 crc kubenswrapper[4780]: I1205 08:24:42.031733 4780 generic.go:334] "Generic (PLEG): container finished" podID="1b057958-a92b-434c-b605-3d2eb2178103" containerID="3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa" exitCode=0 Dec 05 08:24:42 crc kubenswrapper[4780]: I1205 08:24:42.032046 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerDied","Data":"3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa"} Dec 05 08:24:42 crc kubenswrapper[4780]: I1205 08:24:42.032078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerStarted","Data":"61003ba2082ef2a329dded5b80133f60fe1227c20c110946a4980ad0beafcf21"} Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.049647 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerStarted","Data":"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1"} Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.430819 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.433154 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.455339 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.457080 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.477487 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5dd47f8876-qrmx7"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.493539 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.494942 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.496990 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.497114 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmw26\" (UniqueName: \"kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.497206 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.497306 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.497493 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.527964 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.562457 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dd47f8876-qrmx7"] Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599594 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmw26\" (UniqueName: \"kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599655 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599704 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-combined-ca-bundle\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599757 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599802 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data-custom\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599827 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lpnv\" (UniqueName: \"kubernetes.io/projected/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-kube-api-access-7lpnv\") pod 
\"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599909 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpprz\" (UniqueName: \"kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.599987 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.600018 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.600049 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.607222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.613134 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.613780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.618981 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmw26\" (UniqueName: \"kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26\") pod \"heat-api-764bcd996f-zrjvc\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.701895 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: 
\"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.701960 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-combined-ca-bundle\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702057 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702099 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data-custom\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702145 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lpnv\" (UniqueName: \"kubernetes.io/projected/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-kube-api-access-7lpnv\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702173 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpprz\" (UniqueName: \"kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.702227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.707901 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.708581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " 
pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.710353 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.711041 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-combined-ca-bundle\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.711649 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-config-data-custom\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.712461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.720277 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lpnv\" (UniqueName: \"kubernetes.io/projected/ae3f2dd4-7f98-4fad-a85b-a2fec900c371-kube-api-access-7lpnv\") pod \"heat-engine-5dd47f8876-qrmx7\" (UID: \"ae3f2dd4-7f98-4fad-a85b-a2fec900c371\") " pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.732226 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpprz\" (UniqueName: \"kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz\") pod \"heat-cfnapi-56d94fcd9d-b8gml\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.753262 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.774603 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:43 crc kubenswrapper[4780]: I1205 08:24:43.820804 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:44 crc kubenswrapper[4780]: I1205 08:24:44.112124 4780 generic.go:334] "Generic (PLEG): container finished" podID="1b057958-a92b-434c-b605-3d2eb2178103" containerID="cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1" exitCode=0 Dec 05 08:24:44 crc kubenswrapper[4780]: I1205 08:24:44.112468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerDied","Data":"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1"} Dec 05 08:24:44 crc kubenswrapper[4780]: I1205 08:24:44.420088 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:44 crc kubenswrapper[4780]: I1205 08:24:44.469416 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:44 crc kubenswrapper[4780]: I1205 08:24:44.630554 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dd47f8876-qrmx7"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.122484 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dd47f8876-qrmx7" event={"ID":"ae3f2dd4-7f98-4fad-a85b-a2fec900c371","Type":"ContainerStarted","Data":"a349d499f4fa4ac74c6417242e0c31152a4dedc28aeea22946799bc474367293"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.122965 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dd47f8876-qrmx7" event={"ID":"ae3f2dd4-7f98-4fad-a85b-a2fec900c371","Type":"ContainerStarted","Data":"1f885f9bc317098f9fb0ec054e9da978bd7ba04086786664107d20885c5e723a"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.122990 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.125196 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerStarted","Data":"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.126910 4780 generic.go:334] "Generic (PLEG): container finished" podID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerID="1fc394bfc85d90e5f1d921aa55e5cd2428947a9f18141f80c7e6d80afd4428eb" exitCode=1 Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.126947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" event={"ID":"3c7df4ad-2d52-471f-bbcd-b58afc961f24","Type":"ContainerDied","Data":"1fc394bfc85d90e5f1d921aa55e5cd2428947a9f18141f80c7e6d80afd4428eb"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.126998 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" event={"ID":"3c7df4ad-2d52-471f-bbcd-b58afc961f24","Type":"ContainerStarted","Data":"709fc0bd05a96278b3ed94dd0de2349fca6c2538eeb8c3e93e4bea48197f7c98"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.128004 4780 scope.go:117] "RemoveContainer" containerID="1fc394bfc85d90e5f1d921aa55e5cd2428947a9f18141f80c7e6d80afd4428eb" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.128498 4780 generic.go:334] "Generic (PLEG): container finished" podID="37084daf-78db-44a5-b894-e9baa0a4bf10" 
containerID="657a101b279f31037db9c8a2b3551642d5dbfadcfe2b6eb060251858813ee09d" exitCode=1 Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.128534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-764bcd996f-zrjvc" event={"ID":"37084daf-78db-44a5-b894-e9baa0a4bf10","Type":"ContainerDied","Data":"657a101b279f31037db9c8a2b3551642d5dbfadcfe2b6eb060251858813ee09d"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.128561 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-764bcd996f-zrjvc" event={"ID":"37084daf-78db-44a5-b894-e9baa0a4bf10","Type":"ContainerStarted","Data":"120ab9bd49df0bf06fa9d1c085842fc3529feb17a93aa2d66b03aadc6705291b"} Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.128843 4780 scope.go:117] "RemoveContainer" containerID="657a101b279f31037db9c8a2b3551642d5dbfadcfe2b6eb060251858813ee09d" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.146720 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5dd47f8876-qrmx7" podStartSLOduration=2.146694878 podStartE2EDuration="2.146694878s" podCreationTimestamp="2025-12-05 08:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:24:45.144011085 +0000 UTC m=+5919.213527437" watchObservedRunningTime="2025-12-05 08:24:45.146694878 +0000 UTC m=+5919.216211210" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.162345 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w58zl" podStartSLOduration=2.649492736 podStartE2EDuration="5.162325113s" podCreationTimestamp="2025-12-05 08:24:40 +0000 UTC" firstStartedPulling="2025-12-05 08:24:42.033930124 +0000 UTC m=+5916.103446456" lastFinishedPulling="2025-12-05 08:24:44.546762501 +0000 UTC m=+5918.616278833" observedRunningTime="2025-12-05 08:24:45.159457625 +0000 UTC m=+5919.228973947" watchObservedRunningTime="2025-12-05 08:24:45.162325113 +0000 UTC m=+5919.231841445" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.388582 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.389245 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f46cd574c-ssrn4" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" containerID="cri-o://be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3" gracePeriod=60 Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.425843 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5894959478-n5k57"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.437545 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.448054 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5894959478-n5k57"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.448861 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.449678 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.479240 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.479493 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-85784df95c-2kxps" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" containerID="cri-o://f9c607fe821d75b22519653961ac5dcedc8789dc58b29eed860ba3221d8651fe" gracePeriod=60 Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.545960 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-576847467c-n7nz9"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.547677 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.554103 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.567178 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573005 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573101 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-internal-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573139 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data-custom\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573219 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqp4p\" (UniqueName: \"kubernetes.io/projected/d7dba673-a0f0-4f16-8680-4701afda88b9-kube-api-access-mqp4p\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573241 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-public-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.573267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-combined-ca-bundle\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.576926 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-576847467c-n7nz9"] Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.674703 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.674779 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-internal-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.674816 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data-custom\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.674864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-internal-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.674992 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-public-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675055 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqp4p\" (UniqueName: \"kubernetes.io/projected/d7dba673-a0f0-4f16-8680-4701afda88b9-kube-api-access-mqp4p\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675082 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-public-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " 
pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675108 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-combined-ca-bundle\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675128 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data-custom\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675154 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-combined-ca-bundle\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675172 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6l6g\" (UniqueName: \"kubernetes.io/projected/fdbbc584-4e90-4427-88e8-88fe14a459f6-kube-api-access-t6l6g\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.675201 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.685033 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-combined-ca-bundle\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.687570 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-internal-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.693952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.700492 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-config-data-custom\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " 
pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.701211 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7dba673-a0f0-4f16-8680-4701afda88b9-public-tls-certs\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.706943 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqp4p\" (UniqueName: \"kubernetes.io/projected/d7dba673-a0f0-4f16-8680-4701afda88b9-kube-api-access-mqp4p\") pod \"heat-api-5894959478-n5k57\" (UID: \"d7dba673-a0f0-4f16-8680-4701afda88b9\") " pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777546 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data-custom\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777609 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-combined-ca-bundle\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777645 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6l6g\" (UniqueName: \"kubernetes.io/projected/fdbbc584-4e90-4427-88e8-88fe14a459f6-kube-api-access-t6l6g\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777682 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777848 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-internal-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.777894 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-public-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.785663 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-public-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 
08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.785896 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.786222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-internal-tls-certs\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.787666 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.796787 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-combined-ca-bundle\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.797780 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdbbc584-4e90-4427-88e8-88fe14a459f6-config-data-custom\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.800245 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6l6g\" (UniqueName: \"kubernetes.io/projected/fdbbc584-4e90-4427-88e8-88fe14a459f6-kube-api-access-t6l6g\") pod \"heat-cfnapi-576847467c-n7nz9\" (UID: \"fdbbc584-4e90-4427-88e8-88fe14a459f6\") " pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.815361 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-85784df95c-2kxps" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.106:8000/healthcheck\": read tcp 10.217.0.2:58700->10.217.1.106:8000: read: connection reset by peer" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.816896 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-85784df95c-2kxps" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.106:8000/healthcheck\": dial tcp 10.217.1.106:8000: connect: connection refused" Dec 05 08:24:45 crc kubenswrapper[4780]: I1205 08:24:45.918212 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.121808 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6666c5554b-zfm84" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.156950 4780 generic.go:334] "Generic (PLEG): container finished" podID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" exitCode=1 Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.169812 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" event={"ID":"3c7df4ad-2d52-471f-bbcd-b58afc961f24","Type":"ContainerDied","Data":"f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6"} Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.169856 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-764bcd996f-zrjvc" event={"ID":"37084daf-78db-44a5-b894-e9baa0a4bf10","Type":"ContainerDied","Data":"4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767"} Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.169858 4780 generic.go:334] "Generic (PLEG): container finished" podID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerID="4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767" exitCode=1 Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.169897 4780 scope.go:117] "RemoveContainer" containerID="1fc394bfc85d90e5f1d921aa55e5cd2428947a9f18141f80c7e6d80afd4428eb" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.170690 4780 scope.go:117] "RemoveContainer" containerID="4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767" Dec 05 08:24:46 crc kubenswrapper[4780]: E1205 08:24:46.170990 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-764bcd996f-zrjvc_openstack(37084daf-78db-44a5-b894-e9baa0a4bf10)\"" pod="openstack/heat-api-764bcd996f-zrjvc" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.173803 4780 scope.go:117] "RemoveContainer" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" Dec 05 08:24:46 crc kubenswrapper[4780]: E1205 08:24:46.174228 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-56d94fcd9d-b8gml_openstack(3c7df4ad-2d52-471f-bbcd-b58afc961f24)\"" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.186210 4780 generic.go:334] "Generic (PLEG): container finished" podID="f56294b1-f726-4951-a56d-4f02584c11cf" containerID="f9c607fe821d75b22519653961ac5dcedc8789dc58b29eed860ba3221d8651fe" exitCode=0 Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.187148 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85784df95c-2kxps" event={"ID":"f56294b1-f726-4951-a56d-4f02584c11cf","Type":"ContainerDied","Data":"f9c607fe821d75b22519653961ac5dcedc8789dc58b29eed860ba3221d8651fe"} Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.195644 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.196268 4780 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon-log" containerID="cri-o://5303de4171d90ef2e51e37ab70c95e47fdbd5177f692ed0623b5c07fdaa8883d" gracePeriod=30 Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.196422 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" containerID="cri-o://d9cdc66c27c98e8dd4b2bef0fe45e80d3d8b34eabf923e201dfa07f535eccde2" gracePeriod=30 Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.225010 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.225010 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.239861 4780 scope.go:117] "RemoveContainer" containerID="657a101b279f31037db9c8a2b3551642d5dbfadcfe2b6eb060251858813ee09d" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.468650 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.601488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom\") pod \"f56294b1-f726-4951-a56d-4f02584c11cf\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.601737 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle\") pod \"f56294b1-f726-4951-a56d-4f02584c11cf\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.601959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data\") pod \"f56294b1-f726-4951-a56d-4f02584c11cf\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.601987 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwqbk\" (UniqueName: \"kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk\") pod \"f56294b1-f726-4951-a56d-4f02584c11cf\" (UID: \"f56294b1-f726-4951-a56d-4f02584c11cf\") " Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.618165 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk" (OuterVolumeSpecName: "kube-api-access-xwqbk") pod "f56294b1-f726-4951-a56d-4f02584c11cf" (UID: "f56294b1-f726-4951-a56d-4f02584c11cf"). InnerVolumeSpecName "kube-api-access-xwqbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.618554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f56294b1-f726-4951-a56d-4f02584c11cf" (UID: "f56294b1-f726-4951-a56d-4f02584c11cf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.637146 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f56294b1-f726-4951-a56d-4f02584c11cf" (UID: "f56294b1-f726-4951-a56d-4f02584c11cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.680875 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data" (OuterVolumeSpecName: "config-data") pod "f56294b1-f726-4951-a56d-4f02584c11cf" (UID: "f56294b1-f726-4951-a56d-4f02584c11cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.683635 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5894959478-n5k57"] Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.706268 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.706344 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwqbk\" (UniqueName: \"kubernetes.io/projected/f56294b1-f726-4951-a56d-4f02584c11cf-kube-api-access-xwqbk\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.706359 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.706379 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f56294b1-f726-4951-a56d-4f02584c11cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:46 crc kubenswrapper[4780]: I1205 08:24:46.758640 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-576847467c-n7nz9"] Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.196913 4780 scope.go:117] "RemoveContainer" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" Dec 05 08:24:47 crc kubenswrapper[4780]: E1205 08:24:47.197153 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-56d94fcd9d-b8gml_openstack(3c7df4ad-2d52-471f-bbcd-b58afc961f24)\"" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.201526 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-576847467c-n7nz9" event={"ID":"fdbbc584-4e90-4427-88e8-88fe14a459f6","Type":"ContainerStarted","Data":"33af8378b1e643f48b6415328e641434a4a9e43db9b027a3d8cab00dcba05b19"} Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.201690 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-576847467c-n7nz9" event={"ID":"fdbbc584-4e90-4427-88e8-88fe14a459f6","Type":"ContainerStarted","Data":"75e309556d3ce16eabf45022cb289c83b2a435742f1d84f66bbf50348e925e57"} Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.202626 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.204436 4780 scope.go:117] "RemoveContainer" containerID="4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767" Dec 05 08:24:47 crc kubenswrapper[4780]: E1205 08:24:47.204709 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-764bcd996f-zrjvc_openstack(37084daf-78db-44a5-b894-e9baa0a4bf10)\"" pod="openstack/heat-api-764bcd996f-zrjvc" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.206606 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-85784df95c-2kxps" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.206602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-85784df95c-2kxps" event={"ID":"f56294b1-f726-4951-a56d-4f02584c11cf","Type":"ContainerDied","Data":"f4246103e4bbde6a645c6ecd96a6a3872b423ce18f7a3bac319f0fad44e6be04"} Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.206850 4780 scope.go:117] "RemoveContainer" containerID="f9c607fe821d75b22519653961ac5dcedc8789dc58b29eed860ba3221d8651fe" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.215336 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5894959478-n5k57" event={"ID":"d7dba673-a0f0-4f16-8680-4701afda88b9","Type":"ContainerStarted","Data":"6c60e829094c9a308c06f89e681e6bfb3019e6738b33768398a97ad6a43d98b2"} Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.215410 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5894959478-n5k57" event={"ID":"d7dba673-a0f0-4f16-8680-4701afda88b9","Type":"ContainerStarted","Data":"6dcb8440356602725b3a2d7bcdf1d543f2f3a90121372afe1fec1cdae96f799e"} Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.215733 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.268699 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-576847467c-n7nz9" podStartSLOduration=2.268680994 podStartE2EDuration="2.268680994s" podCreationTimestamp="2025-12-05 08:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:24:47.260275445 +0000 UTC m=+5921.329791797" watchObservedRunningTime="2025-12-05 08:24:47.268680994 +0000 UTC m=+5921.338197326" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.320392 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5894959478-n5k57" podStartSLOduration=2.320369599 
podStartE2EDuration="2.320369599s" podCreationTimestamp="2025-12-05 08:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:24:47.316931636 +0000 UTC m=+5921.386447978" watchObservedRunningTime="2025-12-05 08:24:47.320369599 +0000 UTC m=+5921.389885931" Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.342401 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.358086 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-85784df95c-2kxps"] Dec 05 08:24:47 crc kubenswrapper[4780]: I1205 08:24:47.677526 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.150377 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" path="/var/lib/kubelet/pods/f56294b1-f726-4951-a56d-4f02584c11cf/volumes" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.754132 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.754192 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.754976 4780 scope.go:117] "RemoveContainer" containerID="4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767" Dec 05 08:24:48 crc kubenswrapper[4780]: E1205 08:24:48.755209 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-764bcd996f-zrjvc_openstack(37084daf-78db-44a5-b894-e9baa0a4bf10)\"" pod="openstack/heat-api-764bcd996f-zrjvc" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.774689 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.774897 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:48 crc kubenswrapper[4780]: I1205 08:24:48.775831 4780 scope.go:117] "RemoveContainer" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" Dec 05 08:24:48 crc kubenswrapper[4780]: E1205 08:24:48.776169 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-56d94fcd9d-b8gml_openstack(3c7df4ad-2d52-471f-bbcd-b58afc961f24)\"" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" Dec 05 08:24:49 crc kubenswrapper[4780]: I1205 08:24:49.235753 4780 scope.go:117] "RemoveContainer" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" Dec 05 08:24:49 crc kubenswrapper[4780]: E1205 08:24:49.236714 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-56d94fcd9d-b8gml_openstack(3c7df4ad-2d52-471f-bbcd-b58afc961f24)\"" 
pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.244975 4780 generic.go:334] "Generic (PLEG): container finished" podID="f2c00068-c308-4340-9d9a-58430981cadf" containerID="d9cdc66c27c98e8dd4b2bef0fe45e80d3d8b34eabf923e201dfa07f535eccde2" exitCode=0 Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.245025 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerDied","Data":"d9cdc66c27c98e8dd4b2bef0fe45e80d3d8b34eabf923e201dfa07f535eccde2"} Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.778413 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.778744 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.792721 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6f46cd574c-ssrn4" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.107:8004/healthcheck\": read tcp 10.217.0.2:41128->10.217.1.107:8004: read: connection reset by peer" Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.793306 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6f46cd574c-ssrn4" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.107:8004/healthcheck\": dial tcp 10.217.1.107:8004: connect: connection refused" Dec 05 08:24:50 crc kubenswrapper[4780]: I1205 08:24:50.831052 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.249569 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.255667 4780 generic.go:334] "Generic (PLEG): container finished" podID="db88cf84-bc92-4553-aec7-137af48fb72c" containerID="be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3" exitCode=0 Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.255733 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f46cd574c-ssrn4" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.255756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f46cd574c-ssrn4" event={"ID":"db88cf84-bc92-4553-aec7-137af48fb72c","Type":"ContainerDied","Data":"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3"} Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.255797 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f46cd574c-ssrn4" event={"ID":"db88cf84-bc92-4553-aec7-137af48fb72c","Type":"ContainerDied","Data":"7e2425fc0e43f4c3d19064a6d7f3a2cd4f31121c22fc673364cc6ee0d78837de"} Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.255821 4780 scope.go:117] "RemoveContainer" containerID="be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.295347 4780 scope.go:117] "RemoveContainer" containerID="be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3" Dec 05 08:24:51 crc kubenswrapper[4780]: E1205 08:24:51.301458 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3\": container with ID starting with be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3 not found: ID does not exist" containerID="be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.301533 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3"} err="failed to get container status \"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3\": rpc error: code = NotFound desc = could not find container \"be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3\": container with ID starting with be8390eea53e59ef883c28eb3a4bafa7ea845581a15a77af7ddce18cb3cb2eb3 not found: ID does not exist" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.310552 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.324478 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom\") pod \"db88cf84-bc92-4553-aec7-137af48fb72c\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.324653 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data\") pod \"db88cf84-bc92-4553-aec7-137af48fb72c\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.325701 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkk6n\" (UniqueName: \"kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n\") pod \"db88cf84-bc92-4553-aec7-137af48fb72c\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.325857 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle\") pod \"db88cf84-bc92-4553-aec7-137af48fb72c\" (UID: \"db88cf84-bc92-4553-aec7-137af48fb72c\") " Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.345605 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "db88cf84-bc92-4553-aec7-137af48fb72c" (UID: "db88cf84-bc92-4553-aec7-137af48fb72c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.349250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n" (OuterVolumeSpecName: "kube-api-access-mkk6n") pod "db88cf84-bc92-4553-aec7-137af48fb72c" (UID: "db88cf84-bc92-4553-aec7-137af48fb72c"). InnerVolumeSpecName "kube-api-access-mkk6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.362150 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.384080 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db88cf84-bc92-4553-aec7-137af48fb72c" (UID: "db88cf84-bc92-4553-aec7-137af48fb72c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.394118 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data" (OuterVolumeSpecName: "config-data") pod "db88cf84-bc92-4553-aec7-137af48fb72c" (UID: "db88cf84-bc92-4553-aec7-137af48fb72c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.429194 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkk6n\" (UniqueName: \"kubernetes.io/projected/db88cf84-bc92-4553-aec7-137af48fb72c-kube-api-access-mkk6n\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.429228 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.429240 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.429253 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db88cf84-bc92-4553-aec7-137af48fb72c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.592155 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:51 crc kubenswrapper[4780]: I1205 08:24:51.601630 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f46cd574c-ssrn4"] Dec 05 08:24:52 crc kubenswrapper[4780]: I1205 08:24:52.151284 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" path="/var/lib/kubelet/pods/db88cf84-bc92-4553-aec7-137af48fb72c/volumes" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.272814 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w58zl" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="registry-server" containerID="cri-o://84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9" gracePeriod=2 Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.706221 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.895334 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities\") pod \"1b057958-a92b-434c-b605-3d2eb2178103\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.895425 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content\") pod \"1b057958-a92b-434c-b605-3d2eb2178103\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.895472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6tv\" (UniqueName: \"kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv\") pod \"1b057958-a92b-434c-b605-3d2eb2178103\" (UID: \"1b057958-a92b-434c-b605-3d2eb2178103\") " Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.896177 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities" (OuterVolumeSpecName: "utilities") pod "1b057958-a92b-434c-b605-3d2eb2178103" (UID: "1b057958-a92b-434c-b605-3d2eb2178103"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.903698 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv" (OuterVolumeSpecName: "kube-api-access-cg6tv") pod "1b057958-a92b-434c-b605-3d2eb2178103" (UID: "1b057958-a92b-434c-b605-3d2eb2178103"). InnerVolumeSpecName "kube-api-access-cg6tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.941186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b057958-a92b-434c-b605-3d2eb2178103" (UID: "1b057958-a92b-434c-b605-3d2eb2178103"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.998016 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.998066 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b057958-a92b-434c-b605-3d2eb2178103-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:53 crc kubenswrapper[4780]: I1205 08:24:53.998077 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6tv\" (UniqueName: \"kubernetes.io/projected/1b057958-a92b-434c-b605-3d2eb2178103-kube-api-access-cg6tv\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.283436 4780 generic.go:334] "Generic (PLEG): container finished" podID="1b057958-a92b-434c-b605-3d2eb2178103" containerID="84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9" exitCode=0 Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.283478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerDied","Data":"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9"} Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.283515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w58zl" event={"ID":"1b057958-a92b-434c-b605-3d2eb2178103","Type":"ContainerDied","Data":"61003ba2082ef2a329dded5b80133f60fe1227c20c110946a4980ad0beafcf21"} Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.283532 4780 scope.go:117] "RemoveContainer" containerID="84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.283644 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w58zl" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.314277 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.315933 4780 scope.go:117] "RemoveContainer" containerID="cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.326001 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w58zl"] Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.335845 4780 scope.go:117] "RemoveContainer" containerID="3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.383347 4780 scope.go:117] "RemoveContainer" containerID="84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9" Dec 05 08:24:54 crc kubenswrapper[4780]: E1205 08:24:54.383920 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9\": container with ID starting with 84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9 not found: ID does not exist" containerID="84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.383975 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9"} err="failed to get container status \"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9\": rpc error: code = NotFound desc = could not find container \"84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9\": container with ID starting with 84e51a5a84bd769a4f07eea79959687023cacb8d0c1f1c3fe62c69483df2fcd9 not found: ID does not exist" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.384005 4780 scope.go:117] "RemoveContainer" containerID="cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1" Dec 05 08:24:54 crc kubenswrapper[4780]: E1205 08:24:54.384317 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1\": container with ID starting with cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1 not found: ID does not exist" containerID="cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.384351 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1"} err="failed to get container status \"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1\": rpc error: code = NotFound desc = could not find container \"cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1\": container with ID starting with cb622833a486e085fa35abe66e09ff3f6cec71bb3328b4d82f2a5827210a26b1 not found: ID does not exist" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.384366 4780 scope.go:117] "RemoveContainer" containerID="3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa" Dec 05 08:24:54 crc kubenswrapper[4780]: E1205 08:24:54.384679 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa\": container with ID starting with 3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa not found: ID does not exist" containerID="3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa" Dec 05 08:24:54 crc kubenswrapper[4780]: I1205 08:24:54.384698 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa"} err="failed to get container status \"3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa\": rpc error: code = NotFound desc = could not find container \"3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa\": container with ID starting with 3f6e753fde2cc19a6ac305b297f606b7186befdd0d2229c19507a57ee3fba6fa not found: ID does not exist" Dec 05 08:24:55 crc kubenswrapper[4780]: I1205 08:24:55.493262 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:24:56 crc kubenswrapper[4780]: I1205 08:24:56.149670 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b057958-a92b-434c-b605-3d2eb2178103" path="/var/lib/kubelet/pods/1b057958-a92b-434c-b605-3d2eb2178103/volumes" Dec 05 08:24:56 crc kubenswrapper[4780]: I1205 08:24:56.185222 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.98:8443: connect: connection refused" Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.333377 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5894959478-n5k57" Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.391463 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.457846 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-576847467c-n7nz9" Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.521485 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.822480 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.966756 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.981488 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmw26\" (UniqueName: \"kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26\") pod \"37084daf-78db-44a5-b894-e9baa0a4bf10\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.981648 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data\") pod \"37084daf-78db-44a5-b894-e9baa0a4bf10\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.981709 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle\") pod \"37084daf-78db-44a5-b894-e9baa0a4bf10\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.981732 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom\") pod \"37084daf-78db-44a5-b894-e9baa0a4bf10\" (UID: \"37084daf-78db-44a5-b894-e9baa0a4bf10\") " Dec 05 08:24:57 crc kubenswrapper[4780]: I1205 08:24:57.991342 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37084daf-78db-44a5-b894-e9baa0a4bf10" (UID: "37084daf-78db-44a5-b894-e9baa0a4bf10"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.002456 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26" (OuterVolumeSpecName: "kube-api-access-vmw26") pod "37084daf-78db-44a5-b894-e9baa0a4bf10" (UID: "37084daf-78db-44a5-b894-e9baa0a4bf10"). InnerVolumeSpecName "kube-api-access-vmw26". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.026994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37084daf-78db-44a5-b894-e9baa0a4bf10" (UID: "37084daf-78db-44a5-b894-e9baa0a4bf10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.045611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data" (OuterVolumeSpecName: "config-data") pod "37084daf-78db-44a5-b894-e9baa0a4bf10" (UID: "37084daf-78db-44a5-b894-e9baa0a4bf10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.083816 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpprz\" (UniqueName: \"kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz\") pod \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.083907 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle\") pod \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.083930 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom\") pod \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.084016 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data\") pod \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\" (UID: \"3c7df4ad-2d52-471f-bbcd-b58afc961f24\") " Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.084531 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmw26\" (UniqueName: \"kubernetes.io/projected/37084daf-78db-44a5-b894-e9baa0a4bf10-kube-api-access-vmw26\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.084547 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.084556 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.084565 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37084daf-78db-44a5-b894-e9baa0a4bf10-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.086995 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c7df4ad-2d52-471f-bbcd-b58afc961f24" (UID: "3c7df4ad-2d52-471f-bbcd-b58afc961f24"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.088530 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz" (OuterVolumeSpecName: "kube-api-access-zpprz") pod "3c7df4ad-2d52-471f-bbcd-b58afc961f24" (UID: "3c7df4ad-2d52-471f-bbcd-b58afc961f24"). InnerVolumeSpecName "kube-api-access-zpprz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.111655 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c7df4ad-2d52-471f-bbcd-b58afc961f24" (UID: "3c7df4ad-2d52-471f-bbcd-b58afc961f24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.143601 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data" (OuterVolumeSpecName: "config-data") pod "3c7df4ad-2d52-471f-bbcd-b58afc961f24" (UID: "3c7df4ad-2d52-471f-bbcd-b58afc961f24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.186711 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpprz\" (UniqueName: \"kubernetes.io/projected/3c7df4ad-2d52-471f-bbcd-b58afc961f24-kube-api-access-zpprz\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.186753 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.186766 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.186779 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7df4ad-2d52-471f-bbcd-b58afc961f24-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.316685 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" event={"ID":"3c7df4ad-2d52-471f-bbcd-b58afc961f24","Type":"ContainerDied","Data":"709fc0bd05a96278b3ed94dd0de2349fca6c2538eeb8c3e93e4bea48197f7c98"} Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.316708 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56d94fcd9d-b8gml" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.316795 4780 scope.go:117] "RemoveContainer" containerID="f5e7ab30f1850a09ca45c4ae245fa72edb05acbacf9abf08f42e6212f0fe3dc6" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.318600 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-764bcd996f-zrjvc" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.318594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-764bcd996f-zrjvc" event={"ID":"37084daf-78db-44a5-b894-e9baa0a4bf10","Type":"ContainerDied","Data":"120ab9bd49df0bf06fa9d1c085842fc3529feb17a93aa2d66b03aadc6705291b"} Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.339424 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.340090 4780 scope.go:117] "RemoveContainer" containerID="4f7009220a02273893394bf69ed9df4647a23effb16ad3e56a6eed526a5e9767" Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.348353 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-764bcd996f-zrjvc"] Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.358551 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:58 crc kubenswrapper[4780]: I1205 08:24:58.368709 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-56d94fcd9d-b8gml"] Dec 05 08:24:59 crc kubenswrapper[4780]: I1205 08:24:59.907753 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:24:59 crc kubenswrapper[4780]: I1205 08:24:59.908073 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:25:00 crc kubenswrapper[4780]: I1205 08:25:00.149137 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" path="/var/lib/kubelet/pods/37084daf-78db-44a5-b894-e9baa0a4bf10/volumes" Dec 05 08:25:00 crc kubenswrapper[4780]: I1205 08:25:00.149720 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" path="/var/lib/kubelet/pods/3c7df4ad-2d52-471f-bbcd-b58afc961f24/volumes" Dec 05 08:25:03 crc kubenswrapper[4780]: I1205 08:25:03.854040 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5dd47f8876-qrmx7" Dec 05 08:25:03 crc kubenswrapper[4780]: I1205 08:25:03.897398 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:25:03 crc kubenswrapper[4780]: I1205 08:25:03.897623 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-77548cd754-2b6cx" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerName="heat-engine" containerID="cri-o://78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" gracePeriod=60 Dec 05 08:25:05 crc kubenswrapper[4780]: E1205 08:25:05.479710 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 
08:25:05 crc kubenswrapper[4780]: E1205 08:25:05.483407 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 08:25:05 crc kubenswrapper[4780]: E1205 08:25:05.484953 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 08:25:05 crc kubenswrapper[4780]: E1205 08:25:05.485026 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-77548cd754-2b6cx" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerName="heat-engine" Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.056678 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b3b-account-create-update-sphfv"] Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.067463 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mtwcr"] Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.082143 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mtwcr"] Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.097742 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b3b-account-create-update-sphfv"] Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.149215 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b8b038-0acd-4cf5-bc43-4f36c4577b7c" path="/var/lib/kubelet/pods/c8b8b038-0acd-4cf5-bc43-4f36c4577b7c/volumes" Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.150293 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7eff8c1-ed87-418e-86b8-86f37004a4ef" path="/var/lib/kubelet/pods/d7eff8c1-ed87-418e-86b8-86f37004a4ef/volumes" Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.185088 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.98:8443: connect: connection refused" Dec 05 08:25:06 crc kubenswrapper[4780]: I1205 08:25:06.185211 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.534185 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67"] Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536246 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536276 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536290 4780 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="extract-content" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536299 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="extract-content" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536313 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536321 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536333 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="extract-utilities" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536340 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="extract-utilities" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536361 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536367 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536379 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536386 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536399 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="registry-server" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536406 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="registry-server" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536423 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536429 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: E1205 08:25:13.536460 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536467 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536670 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536687 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536700 4780 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="db88cf84-bc92-4553-aec7-137af48fb72c" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536711 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b057958-a92b-434c-b605-3d2eb2178103" containerName="registry-server" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536724 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56294b1-f726-4951-a56d-4f02584c11cf" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.536737 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7df4ad-2d52-471f-bbcd-b58afc961f24" containerName="heat-cfnapi" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.537215 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="37084daf-78db-44a5-b894-e9baa0a4bf10" containerName="heat-api" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.538269 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.542806 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67"] Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.543922 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.693327 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmhs\" (UniqueName: \"kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.693396 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.693522 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.797252 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmhs\" (UniqueName: \"kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.797333 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.797366 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.797829 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.798024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.816961 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmhs\" (UniqueName: \"kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:13 crc kubenswrapper[4780]: I1205 08:25:13.872218 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.330599 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67"] Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.498160 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" event={"ID":"136bf2be-c593-48b8-9531-e92937442594","Type":"ContainerStarted","Data":"142db2c442bac7c8caf83277540c2a06351d72716479a4c3e75f417291afe11b"} Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.500442 4780 generic.go:334] "Generic (PLEG): container finished" podID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerID="78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" exitCode=0 Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.500478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-77548cd754-2b6cx" event={"ID":"b20abb6a-39dc-49a3-8289-01e85cd7de38","Type":"ContainerDied","Data":"78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078"} Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.698578 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.819026 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle\") pod \"b20abb6a-39dc-49a3-8289-01e85cd7de38\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.819418 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom\") pod \"b20abb6a-39dc-49a3-8289-01e85cd7de38\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.819659 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddhxj\" (UniqueName: \"kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj\") pod \"b20abb6a-39dc-49a3-8289-01e85cd7de38\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.819693 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data\") pod \"b20abb6a-39dc-49a3-8289-01e85cd7de38\" (UID: \"b20abb6a-39dc-49a3-8289-01e85cd7de38\") " Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.825524 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj" (OuterVolumeSpecName: "kube-api-access-ddhxj") pod "b20abb6a-39dc-49a3-8289-01e85cd7de38" (UID: "b20abb6a-39dc-49a3-8289-01e85cd7de38"). InnerVolumeSpecName "kube-api-access-ddhxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.832736 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b20abb6a-39dc-49a3-8289-01e85cd7de38" (UID: "b20abb6a-39dc-49a3-8289-01e85cd7de38"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.851833 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b20abb6a-39dc-49a3-8289-01e85cd7de38" (UID: "b20abb6a-39dc-49a3-8289-01e85cd7de38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.873111 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data" (OuterVolumeSpecName: "config-data") pod "b20abb6a-39dc-49a3-8289-01e85cd7de38" (UID: "b20abb6a-39dc-49a3-8289-01e85cd7de38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.922855 4780 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.922919 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddhxj\" (UniqueName: \"kubernetes.io/projected/b20abb6a-39dc-49a3-8289-01e85cd7de38-kube-api-access-ddhxj\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.922935 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:14 crc kubenswrapper[4780]: I1205 08:25:14.922946 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20abb6a-39dc-49a3-8289-01e85cd7de38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.039285 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lmzsz"] Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.050222 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lmzsz"] Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.511309 4780 generic.go:334] "Generic (PLEG): container finished" podID="136bf2be-c593-48b8-9531-e92937442594" containerID="2f7dfa384a47c361c375e036338adfdd56aa82a63fcdbdb1dea8e02f7bd297a4" exitCode=0 Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.511394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" event={"ID":"136bf2be-c593-48b8-9531-e92937442594","Type":"ContainerDied","Data":"2f7dfa384a47c361c375e036338adfdd56aa82a63fcdbdb1dea8e02f7bd297a4"} Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.514606 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-77548cd754-2b6cx" event={"ID":"b20abb6a-39dc-49a3-8289-01e85cd7de38","Type":"ContainerDied","Data":"16fba49ea1be7a58f067545b7baa4c6ce2915ce96fb2e4e633b0414c5aa8683f"} Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.514654 4780 scope.go:117] "RemoveContainer" containerID="78ef36564ec5cbb2673a28b50344efac4e7c3c7b5fa4f2a14a0ca53c5ebe1078" Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.514869 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-77548cd754-2b6cx" Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.550705 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:25:15 crc kubenswrapper[4780]: I1205 08:25:15.560138 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-77548cd754-2b6cx"] Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.152238 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f776a1f-03be-4240-8169-09cc7aaf98f7" path="/var/lib/kubelet/pods/5f776a1f-03be-4240-8169-09cc7aaf98f7/volumes" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.152927 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" path="/var/lib/kubelet/pods/b20abb6a-39dc-49a3-8289-01e85cd7de38/volumes" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.185319 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-f88df4d7b-87h9z" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.98:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.98:8443: connect: connection refused" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.525839 4780 generic.go:334] "Generic (PLEG): container finished" podID="f2c00068-c308-4340-9d9a-58430981cadf" containerID="5303de4171d90ef2e51e37ab70c95e47fdbd5177f692ed0623b5c07fdaa8883d" exitCode=137 Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.525947 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerDied","Data":"5303de4171d90ef2e51e37ab70c95e47fdbd5177f692ed0623b5c07fdaa8883d"} Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.525999 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f88df4d7b-87h9z" event={"ID":"f2c00068-c308-4340-9d9a-58430981cadf","Type":"ContainerDied","Data":"48ffb3da5787f550ced9b03cd27c9a8725e16f09976fd42f54b8bee872a66038"} Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.526012 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ffb3da5787f550ced9b03cd27c9a8725e16f09976fd42f54b8bee872a66038" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.590645 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759082 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759180 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759286 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759350 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-888ph\" (UniqueName: \"kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759386 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759419 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.759442 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs\") pod \"f2c00068-c308-4340-9d9a-58430981cadf\" (UID: \"f2c00068-c308-4340-9d9a-58430981cadf\") " Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.760185 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs" (OuterVolumeSpecName: "logs") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.765004 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.766627 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph" (OuterVolumeSpecName: "kube-api-access-888ph") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "kube-api-access-888ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.788423 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts" (OuterVolumeSpecName: "scripts") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.790142 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data" (OuterVolumeSpecName: "config-data") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.791564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.812540 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f2c00068-c308-4340-9d9a-58430981cadf" (UID: "f2c00068-c308-4340-9d9a-58430981cadf"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863158 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863233 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-888ph\" (UniqueName: \"kubernetes.io/projected/f2c00068-c308-4340-9d9a-58430981cadf-kube-api-access-888ph\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863261 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863279 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f2c00068-c308-4340-9d9a-58430981cadf-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863295 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c00068-c308-4340-9d9a-58430981cadf-logs\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863312 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:16 crc kubenswrapper[4780]: I1205 08:25:16.863328 4780 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c00068-c308-4340-9d9a-58430981cadf-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:17 crc kubenswrapper[4780]: I1205 08:25:17.538919 4780 generic.go:334] "Generic (PLEG): container finished" podID="136bf2be-c593-48b8-9531-e92937442594" containerID="8e2ac8ae46cec2409614a83fa35c826ffa7b2cd647025b0d2c5e80be9c294e1b" exitCode=0 Dec 05 08:25:17 crc kubenswrapper[4780]: I1205 08:25:17.538991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" event={"ID":"136bf2be-c593-48b8-9531-e92937442594","Type":"ContainerDied","Data":"8e2ac8ae46cec2409614a83fa35c826ffa7b2cd647025b0d2c5e80be9c294e1b"} Dec 05 08:25:17 crc kubenswrapper[4780]: I1205 08:25:17.540648 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f88df4d7b-87h9z" Dec 05 08:25:17 crc kubenswrapper[4780]: I1205 08:25:17.604948 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:25:17 crc kubenswrapper[4780]: I1205 08:25:17.614705 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f88df4d7b-87h9z"] Dec 05 08:25:18 crc kubenswrapper[4780]: I1205 08:25:18.148602 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c00068-c308-4340-9d9a-58430981cadf" path="/var/lib/kubelet/pods/f2c00068-c308-4340-9d9a-58430981cadf/volumes" Dec 05 08:25:18 crc kubenswrapper[4780]: I1205 08:25:18.551382 4780 generic.go:334] "Generic (PLEG): container finished" podID="136bf2be-c593-48b8-9531-e92937442594" containerID="b6aef5b1f227e4a61b304073ad887c39195541360899da28582216e5e77530e1" exitCode=0 Dec 05 08:25:18 crc kubenswrapper[4780]: I1205 08:25:18.551454 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" event={"ID":"136bf2be-c593-48b8-9531-e92937442594","Type":"ContainerDied","Data":"b6aef5b1f227e4a61b304073ad887c39195541360899da28582216e5e77530e1"} Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.854118 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.925682 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util\") pod \"136bf2be-c593-48b8-9531-e92937442594\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.926156 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle\") pod \"136bf2be-c593-48b8-9531-e92937442594\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.926221 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsmhs\" (UniqueName: \"kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs\") pod \"136bf2be-c593-48b8-9531-e92937442594\" (UID: \"136bf2be-c593-48b8-9531-e92937442594\") " Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.929853 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle" (OuterVolumeSpecName: "bundle") pod "136bf2be-c593-48b8-9531-e92937442594" (UID: "136bf2be-c593-48b8-9531-e92937442594"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:25:19 crc kubenswrapper[4780]: I1205 08:25:19.933326 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs" (OuterVolumeSpecName: "kube-api-access-zsmhs") pod "136bf2be-c593-48b8-9531-e92937442594" (UID: "136bf2be-c593-48b8-9531-e92937442594"). InnerVolumeSpecName "kube-api-access-zsmhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.009894 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util" (OuterVolumeSpecName: "util") pod "136bf2be-c593-48b8-9531-e92937442594" (UID: "136bf2be-c593-48b8-9531-e92937442594"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.029970 4780 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-util\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.030251 4780 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/136bf2be-c593-48b8-9531-e92937442594-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.030390 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsmhs\" (UniqueName: \"kubernetes.io/projected/136bf2be-c593-48b8-9531-e92937442594-kube-api-access-zsmhs\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.571058 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" event={"ID":"136bf2be-c593-48b8-9531-e92937442594","Type":"ContainerDied","Data":"142db2c442bac7c8caf83277540c2a06351d72716479a4c3e75f417291afe11b"} Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.571126 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="142db2c442bac7c8caf83277540c2a06351d72716479a4c3e75f417291afe11b" Dec 05 08:25:20 crc kubenswrapper[4780]: I1205 08:25:20.571142 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67" Dec 05 08:25:29 crc kubenswrapper[4780]: I1205 08:25:29.849232 4780 scope.go:117] "RemoveContainer" containerID="bf4b59ac7e3470e0773e802e944c8190cd8f71d1fc2543bc1ed726b473ced0ec" Dec 05 08:25:29 crc kubenswrapper[4780]: I1205 08:25:29.907535 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:25:29 crc kubenswrapper[4780]: I1205 08:25:29.907620 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:25:29 crc kubenswrapper[4780]: I1205 08:25:29.914836 4780 scope.go:117] "RemoveContainer" containerID="d6d25aac74204f4a91291434fdc4f513ab5476d6a2d123880697e20668c3290a" Dec 05 08:25:29 crc kubenswrapper[4780]: I1205 08:25:29.979905 4780 scope.go:117] "RemoveContainer" containerID="b9a486fe16ae93d036150541afa1a42229631d7e764e62cfaad9d7fa9fcf55cd" Dec 05 08:25:30 crc kubenswrapper[4780]: I1205 08:25:30.087027 4780 scope.go:117] "RemoveContainer" containerID="9efe5c5171261230c6db5a9ffa7cee26d391fbd6ec29600a264fbe9ffe9a0dcd" Dec 05 08:25:30 crc kubenswrapper[4780]: I1205 08:25:30.121250 4780 scope.go:117] "RemoveContainer" containerID="ff58aeaac57a573ee7aef08ed754f87e2359dc26f44625e37e8b4c252c8002c8" Dec 05 08:25:30 crc kubenswrapper[4780]: I1205 08:25:30.186349 4780 scope.go:117] "RemoveContainer" containerID="7fd05480d6762d8769840effa3a0fa102bb6e34ee4f083c54e3d5b46577d207d" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.003770 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9"] Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004568 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerName="heat-engine" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerName="heat-engine" Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004595 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="util" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004603 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="util" Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004625 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="extract" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004633 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="extract" Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004689 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004698 4780 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004719 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon-log" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004726 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon-log" Dec 05 08:25:32 crc kubenswrapper[4780]: E1205 08:25:32.004742 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="pull" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004749 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="pull" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.004976 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon-log" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.005000 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="136bf2be-c593-48b8-9531-e92937442594" containerName="extract" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.005015 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c00068-c308-4340-9d9a-58430981cadf" containerName="horizon" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.005030 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20abb6a-39dc-49a3-8289-01e85cd7de38" containerName="heat-engine" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.005904 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.008642 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.009660 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.009705 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-jbxhv" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.017149 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9"] Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.079391 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dmw\" (UniqueName: \"kubernetes.io/projected/aed1375c-8cad-45e8-b1e1-9ffce12b6191-kube-api-access-p9dmw\") pod \"obo-prometheus-operator-668cf9dfbb-nqsf9\" (UID: \"aed1375c-8cad-45e8-b1e1-9ffce12b6191\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.327292 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dmw\" (UniqueName: \"kubernetes.io/projected/aed1375c-8cad-45e8-b1e1-9ffce12b6191-kube-api-access-p9dmw\") pod \"obo-prometheus-operator-668cf9dfbb-nqsf9\" (UID: \"aed1375c-8cad-45e8-b1e1-9ffce12b6191\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.366756 4780 
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.401386 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.402790 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.405061 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.405274 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-z4pwc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.412820 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.423710 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.423836 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.429774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.429859 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.431360 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.441780 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.460956 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.544261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.544321 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.544432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.544478 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.555827 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.557332 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.559021 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b0730c81-1449-4ce1-a29e-a5a57e06b444-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc\" (UID: \"b0730c81-1449-4ce1-a29e-a5a57e06b444\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.559355 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2af3a1a3-37b1-4fec-a413-f898353aa3f8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8\" (UID: \"2af3a1a3-37b1-4fec-a413-f898353aa3f8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.563951 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-vvh49"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.565353 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.569733 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7sjgx"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.569800 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.588359 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-vvh49"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.626533 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.737924 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.748268 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m648r\" (UniqueName: \"kubernetes.io/projected/66757364-1218-441c-8f46-57bbd91142f8-kube-api-access-m648r\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.748571 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66757364-1218-441c-8f46-57bbd91142f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.760895 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.783927 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-hvxtn"]
Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.785110 4780 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.789364 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-2r5qt" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.795229 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-hvxtn"] Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.850034 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m648r\" (UniqueName: \"kubernetes.io/projected/66757364-1218-441c-8f46-57bbd91142f8-kube-api-access-m648r\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.850404 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66757364-1218-441c-8f46-57bbd91142f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.856910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66757364-1218-441c-8f46-57bbd91142f8-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.881853 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m648r\" (UniqueName: \"kubernetes.io/projected/66757364-1218-441c-8f46-57bbd91142f8-kube-api-access-m648r\") pod \"observability-operator-d8bb48f5d-vvh49\" (UID: \"66757364-1218-441c-8f46-57bbd91142f8\") " pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.957952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hrq\" (UniqueName: \"kubernetes.io/projected/c72d36a9-d0a9-4cea-9cbe-930e2435e813-kube-api-access-v7hrq\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:32 crc kubenswrapper[4780]: I1205 08:25:32.959714 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c72d36a9-d0a9-4cea-9cbe-930e2435e813-openshift-service-ca\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.063062 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hrq\" (UniqueName: \"kubernetes.io/projected/c72d36a9-d0a9-4cea-9cbe-930e2435e813-kube-api-access-v7hrq\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.063203 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c72d36a9-d0a9-4cea-9cbe-930e2435e813-openshift-service-ca\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.064241 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c72d36a9-d0a9-4cea-9cbe-930e2435e813-openshift-service-ca\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.083678 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hrq\" (UniqueName: \"kubernetes.io/projected/c72d36a9-d0a9-4cea-9cbe-930e2435e813-kube-api-access-v7hrq\") pod \"perses-operator-5446b9c989-hvxtn\" (UID: \"c72d36a9-d0a9-4cea-9cbe-930e2435e813\") " pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.096410 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.125462 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.380210 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9"] Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.511641 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc"] Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.622175 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8"] Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.773130 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc" event={"ID":"b0730c81-1449-4ce1-a29e-a5a57e06b444","Type":"ContainerStarted","Data":"5406a2d14a4e29b0332ea455e52034f7686c4365ac6c5da3c9f0d6f4128ea648"} Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.775764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8" event={"ID":"2af3a1a3-37b1-4fec-a413-f898353aa3f8","Type":"ContainerStarted","Data":"64b5b259810631b7d4ae71f29e8f9b7e07d78bc8f1ead89000b7035707141a1f"} Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.781079 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-vvh49"] Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.781686 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" event={"ID":"aed1375c-8cad-45e8-b1e1-9ffce12b6191","Type":"ContainerStarted","Data":"ad9d87d9b28248a4a4ddeafebfb4921ef9031b525123fe270dcd6f34e80c7347"} Dec 05 08:25:33 crc kubenswrapper[4780]: I1205 08:25:33.789704 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5446b9c989-hvxtn"] Dec 05 08:25:34 crc kubenswrapper[4780]: I1205 08:25:34.794520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" event={"ID":"c72d36a9-d0a9-4cea-9cbe-930e2435e813","Type":"ContainerStarted","Data":"c751d6e5f6a03cc13eeddd95514d5a490e6c65145e9b61350d21d46457281d25"} Dec 05 08:25:34 crc kubenswrapper[4780]: I1205 08:25:34.796285 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" event={"ID":"66757364-1218-441c-8f46-57bbd91142f8","Type":"ContainerStarted","Data":"10a11e375477857235716db3f65bd8c23fa74b85aaca8c4808a5fad0d1480f2b"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.878842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" event={"ID":"66757364-1218-441c-8f46-57bbd91142f8","Type":"ContainerStarted","Data":"a7deb909ebebfd12694343affb545b35a5caa0b39f6f86d7bec257a4962d8c0c"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.880010 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.881763 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" event={"ID":"aed1375c-8cad-45e8-b1e1-9ffce12b6191","Type":"ContainerStarted","Data":"ac6ebc747a60bc785509b4f325685bec5567a8951333eacc63a174cd57faed31"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.884206 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc" event={"ID":"b0730c81-1449-4ce1-a29e-a5a57e06b444","Type":"ContainerStarted","Data":"10d45818025830e6955d69aa89f0a5bc66b1d80952d8bd73c36bb24e7d991b21"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.884462 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.885981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8" event={"ID":"2af3a1a3-37b1-4fec-a413-f898353aa3f8","Type":"ContainerStarted","Data":"d68b09127285d83b3801c395625d36c762e154e45e158d6381dedd68051c8e4f"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.888665 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" event={"ID":"c72d36a9-d0a9-4cea-9cbe-930e2435e813","Type":"ContainerStarted","Data":"1048ef1d4356580d5c10946665806b7d549ab007bc94a2a2ecd9729aa6071f94"} Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.889206 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.923861 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-vvh49" podStartSLOduration=2.562434462 podStartE2EDuration="10.923838993s" podCreationTimestamp="2025-12-05 08:25:32 +0000 UTC" firstStartedPulling="2025-12-05 08:25:33.761544679 +0000 UTC m=+5967.831061011" lastFinishedPulling="2025-12-05 08:25:42.12294921 +0000 UTC m=+5976.192465542" observedRunningTime="2025-12-05 
08:25:42.909710579 +0000 UTC m=+5976.979226921" watchObservedRunningTime="2025-12-05 08:25:42.923838993 +0000 UTC m=+5976.993355325" Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.942126 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" podStartSLOduration=2.694033751 podStartE2EDuration="10.94210768s" podCreationTimestamp="2025-12-05 08:25:32 +0000 UTC" firstStartedPulling="2025-12-05 08:25:33.805019991 +0000 UTC m=+5967.874536323" lastFinishedPulling="2025-12-05 08:25:42.05309392 +0000 UTC m=+5976.122610252" observedRunningTime="2025-12-05 08:25:42.933182208 +0000 UTC m=+5977.002698540" watchObservedRunningTime="2025-12-05 08:25:42.94210768 +0000 UTC m=+5977.011624012" Dec 05 08:25:42 crc kubenswrapper[4780]: I1205 08:25:42.976943 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc" podStartSLOduration=2.538300076 podStartE2EDuration="10.969222318s" podCreationTimestamp="2025-12-05 08:25:32 +0000 UTC" firstStartedPulling="2025-12-05 08:25:33.554760465 +0000 UTC m=+5967.624276797" lastFinishedPulling="2025-12-05 08:25:41.985682707 +0000 UTC m=+5976.055199039" observedRunningTime="2025-12-05 08:25:42.962908016 +0000 UTC m=+5977.032424358" watchObservedRunningTime="2025-12-05 08:25:42.969222318 +0000 UTC m=+5977.038738650" Dec 05 08:25:43 crc kubenswrapper[4780]: I1205 08:25:43.095643 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-nqsf9" podStartSLOduration=3.44284311 podStartE2EDuration="12.095622996s" podCreationTimestamp="2025-12-05 08:25:31 +0000 UTC" firstStartedPulling="2025-12-05 08:25:33.400059217 +0000 UTC m=+5967.469575549" lastFinishedPulling="2025-12-05 08:25:42.052839103 +0000 UTC m=+5976.122355435" observedRunningTime="2025-12-05 08:25:43.052227476 +0000 UTC m=+5977.121743808" watchObservedRunningTime="2025-12-05 08:25:43.095622996 +0000 UTC m=+5977.165139318" Dec 05 08:25:53 crc kubenswrapper[4780]: I1205 08:25:53.129445 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-hvxtn" Dec 05 08:25:53 crc kubenswrapper[4780]: I1205 08:25:53.183036 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8" podStartSLOduration=12.78333302 podStartE2EDuration="21.183016882s" podCreationTimestamp="2025-12-05 08:25:32 +0000 UTC" firstStartedPulling="2025-12-05 08:25:33.586029795 +0000 UTC m=+5967.655546127" lastFinishedPulling="2025-12-05 08:25:41.985713657 +0000 UTC m=+5976.055229989" observedRunningTime="2025-12-05 08:25:43.104174779 +0000 UTC m=+5977.173691121" watchObservedRunningTime="2025-12-05 08:25:53.183016882 +0000 UTC m=+5987.252533224" Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.939523 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.940214 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" containerName="openstackclient" containerID="cri-o://7d81b6d424c7e115ba243794f0155f4a28119eaff4267a1435f37de6bc2e4a02" gracePeriod=2 Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.954694 4780 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/openstackclient"] Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.982728 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 08:25:55 crc kubenswrapper[4780]: E1205 08:25:55.983145 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" containerName="openstackclient" Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.983167 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" containerName="openstackclient" Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.983369 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" containerName="openstackclient" Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.984055 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 08:25:55 crc kubenswrapper[4780]: I1205 08:25:55.990591 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.009389 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.117855 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvql\" (UniqueName: \"kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.117930 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.117963 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.117994 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.220507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvql\" (UniqueName: \"kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.220576 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.220622 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.220665 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.225923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.228648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.232896 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.279327 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvql\" (UniqueName: \"kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql\") pod \"openstackclient\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.317687 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.322607 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.336867 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.352107 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-rvsdl" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.390952 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.425370 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqkz\" (UniqueName: \"kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz\") pod \"kube-state-metrics-0\" (UID: \"a8bd5357-473e-47a9-baeb-38c14f8a7570\") " pod="openstack/kube-state-metrics-0" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.530122 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqkz\" (UniqueName: \"kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz\") pod \"kube-state-metrics-0\" (UID: \"a8bd5357-473e-47a9-baeb-38c14f8a7570\") " pod="openstack/kube-state-metrics-0" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.583873 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqkz\" (UniqueName: \"kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz\") pod \"kube-state-metrics-0\" (UID: \"a8bd5357-473e-47a9-baeb-38c14f8a7570\") " pod="openstack/kube-state-metrics-0" Dec 05 08:25:56 crc kubenswrapper[4780]: I1205 08:25:56.692410 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.389345 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:25:57 crc kubenswrapper[4780]: W1205 08:25:57.399016 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f0b5da2_8b8e_4293_97a7_3109575ece16.slice/crio-a7c9136c18dc86221a6fa2a50cd473c7d28632b203c6ca549673179dfbb79027 WatchSource:0}: Error finding container a7c9136c18dc86221a6fa2a50cd473c7d28632b203c6ca549673179dfbb79027: Status 404 returned error can't find the container with id a7c9136c18dc86221a6fa2a50cd473c7d28632b203c6ca549673179dfbb79027 Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.484618 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.502942 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.508379 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.508480 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.511676 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.512265 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.513645 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-w6z8h" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.526356 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591545 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591681 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591861 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591914 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvlp\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-kube-api-access-7xvlp\") pod 
\"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.591962 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.652663 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700111 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700186 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700258 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700399 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700452 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700511 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvlp\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-kube-api-access-7xvlp\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.700572 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.703778 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.707408 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.708186 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.718503 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.719432 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.720466 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/93e192aa-558f-423c-9ed8-d0e110dab4fc-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.747074 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvlp\" (UniqueName: \"kubernetes.io/projected/93e192aa-558f-423c-9ed8-d0e110dab4fc-kube-api-access-7xvlp\") pod \"alertmanager-metric-storage-0\" (UID: \"93e192aa-558f-423c-9ed8-d0e110dab4fc\") " pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.888801 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.962842 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.968869 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.977944 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.978001 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.978044 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.978363 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zdgzr" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.978531 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 05 08:25:57 crc kubenswrapper[4780]: I1205 08:25:57.978548 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.005260 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.104055 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8bd5357-473e-47a9-baeb-38c14f8a7570","Type":"ContainerStarted","Data":"a268da03f650a5abb3fa6616c220d72b8fc29f3982b7f2c9bf19bc4c32df07b9"} Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.106769 4780 generic.go:334] "Generic (PLEG): container finished" podID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" containerID="7d81b6d424c7e115ba243794f0155f4a28119eaff4267a1435f37de6bc2e4a02" exitCode=137 Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.107922 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6f0b5da2-8b8e-4293-97a7-3109575ece16","Type":"ContainerStarted","Data":"c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9"} Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.107959 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6f0b5da2-8b8e-4293-97a7-3109575ece16","Type":"ContainerStarted","Data":"a7c9136c18dc86221a6fa2a50cd473c7d28632b203c6ca549673179dfbb79027"} Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110102 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110166 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110244 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110303 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110357 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110386 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.110466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk599\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.124851 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.124835335 podStartE2EDuration="3.124835335s" podCreationTimestamp="2025-12-05 08:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:25:58.124079683 +0000 UTC m=+5992.193596005" watchObservedRunningTime="2025-12-05 08:25:58.124835335 +0000 UTC m=+5992.194351667" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.211913 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.211976 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212031 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212084 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212104 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212168 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk599\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212284 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.212344 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.214006 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.222110 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.224380 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.224477 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.224739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.225154 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.226335 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.226374 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4bd8751af295539aab84a588ec3ce7ce55cafd5cbe44348a9b9f2b73158eb5b0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.232532 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk599\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.301129 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.356978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.468365 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.567828 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.571308 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.728206 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle\") pod \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.728656 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rth8\" (UniqueName: \"kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8\") pod \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.728766 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret\") pod \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.728937 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config\") pod \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\" (UID: \"e6c5dc61-e7ce-4343-b825-1a91fd8016a9\") " Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.735870 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8" (OuterVolumeSpecName: "kube-api-access-5rth8") pod "e6c5dc61-e7ce-4343-b825-1a91fd8016a9" (UID: "e6c5dc61-e7ce-4343-b825-1a91fd8016a9"). InnerVolumeSpecName "kube-api-access-5rth8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.765774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e6c5dc61-e7ce-4343-b825-1a91fd8016a9" (UID: "e6c5dc61-e7ce-4343-b825-1a91fd8016a9"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.796981 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6c5dc61-e7ce-4343-b825-1a91fd8016a9" (UID: "e6c5dc61-e7ce-4343-b825-1a91fd8016a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.828934 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e6c5dc61-e7ce-4343-b825-1a91fd8016a9" (UID: "e6c5dc61-e7ce-4343-b825-1a91fd8016a9"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.838461 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.838495 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.838508 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rth8\" (UniqueName: \"kubernetes.io/projected/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-kube-api-access-5rth8\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.838520 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6c5dc61-e7ce-4343-b825-1a91fd8016a9-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 08:25:58 crc kubenswrapper[4780]: I1205 08:25:58.980115 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:25:58 crc kubenswrapper[4780]: W1205 08:25:58.981533 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cf0500e_1110_41fe_bc99_285a68379741.slice/crio-33fbd268acf97610925cb122874c81bf6d069651b5e9d795624b0dc6025b3bf8 WatchSource:0}: Error finding container 33fbd268acf97610925cb122874c81bf6d069651b5e9d795624b0dc6025b3bf8: Status 404 returned error can't find the container with id 33fbd268acf97610925cb122874c81bf6d069651b5e9d795624b0dc6025b3bf8 Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.119598 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8bd5357-473e-47a9-baeb-38c14f8a7570","Type":"ContainerStarted","Data":"94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f"} Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.119699 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.121354 4780 scope.go:117] "RemoveContainer" containerID="7d81b6d424c7e115ba243794f0155f4a28119eaff4267a1435f37de6bc2e4a02" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.121362 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.123200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerStarted","Data":"33fbd268acf97610925cb122874c81bf6d069651b5e9d795624b0dc6025b3bf8"} Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.127113 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"93e192aa-558f-423c-9ed8-d0e110dab4fc","Type":"ContainerStarted","Data":"1db7c0af720ae3dbebb28eb0453b09ac291c3c4933286e7649e94bece2b1a018"} Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.142471 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.151128 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.493109911 podStartE2EDuration="3.151107558s" podCreationTimestamp="2025-12-05 08:25:56 +0000 UTC" firstStartedPulling="2025-12-05 08:25:57.673997232 +0000 UTC m=+5991.743513564" lastFinishedPulling="2025-12-05 08:25:58.331994879 +0000 UTC m=+5992.401511211" observedRunningTime="2025-12-05 08:25:59.137057806 +0000 UTC m=+5993.206574138" watchObservedRunningTime="2025-12-05 08:25:59.151107558 +0000 UTC m=+5993.220623890" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.913464 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.914845 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.914916 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.917782 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:25:59 crc kubenswrapper[4780]: I1205 08:25:59.917936 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" gracePeriod=600 Dec 05 08:26:00 crc kubenswrapper[4780]: E1205 08:26:00.122008 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:00 crc kubenswrapper[4780]: I1205 08:26:00.154089 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" exitCode=0 Dec 05 08:26:00 crc kubenswrapper[4780]: I1205 08:26:00.158732 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c5dc61-e7ce-4343-b825-1a91fd8016a9" path="/var/lib/kubelet/pods/e6c5dc61-e7ce-4343-b825-1a91fd8016a9/volumes" Dec 05 08:26:00 crc kubenswrapper[4780]: I1205 08:26:00.159308 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690"} Dec 05 08:26:00 crc kubenswrapper[4780]: I1205 08:26:00.159353 4780 scope.go:117] "RemoveContainer" containerID="349e89f9246808105ec0ea65aea43e2605c8616dad9fd9a33f62e7fb3ca35e96" Dec 05 08:26:00 crc kubenswrapper[4780]: I1205 08:26:00.160327 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:26:00 crc kubenswrapper[4780]: E1205 08:26:00.160715 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:05 crc kubenswrapper[4780]: I1205 08:26:05.199790 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"93e192aa-558f-423c-9ed8-d0e110dab4fc","Type":"ContainerStarted","Data":"33792738c2c69a4079966b58878ace4ed8e672fff82f4329218f1eaa7bda7332"} Dec 05 08:26:05 crc kubenswrapper[4780]: I1205 08:26:05.202271 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerStarted","Data":"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59"} Dec 05 08:26:06 crc kubenswrapper[4780]: I1205 08:26:06.697651 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 08:26:11 crc kubenswrapper[4780]: I1205 08:26:11.261488 4780 generic.go:334] "Generic (PLEG): container finished" podID="93e192aa-558f-423c-9ed8-d0e110dab4fc" containerID="33792738c2c69a4079966b58878ace4ed8e672fff82f4329218f1eaa7bda7332" exitCode=0 Dec 05 08:26:11 crc kubenswrapper[4780]: I1205 08:26:11.262188 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"93e192aa-558f-423c-9ed8-d0e110dab4fc","Type":"ContainerDied","Data":"33792738c2c69a4079966b58878ace4ed8e672fff82f4329218f1eaa7bda7332"} Dec 05 08:26:11 crc kubenswrapper[4780]: I1205 08:26:11.273656 4780 generic.go:334] "Generic (PLEG): container finished" podID="8cf0500e-1110-41fe-bc99-285a68379741" 
containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" exitCode=0 Dec 05 08:26:11 crc kubenswrapper[4780]: I1205 08:26:11.273767 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerDied","Data":"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59"} Dec 05 08:26:12 crc kubenswrapper[4780]: I1205 08:26:12.141310 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:26:12 crc kubenswrapper[4780]: E1205 08:26:12.141628 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:14 crc kubenswrapper[4780]: I1205 08:26:14.305490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"93e192aa-558f-423c-9ed8-d0e110dab4fc","Type":"ContainerStarted","Data":"f4c1bffbee7e0f2b9aa9c177ec0f94984c6fc89703402aaa8d63d7da0a23a548"} Dec 05 08:26:19 crc kubenswrapper[4780]: I1205 08:26:19.355556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"93e192aa-558f-423c-9ed8-d0e110dab4fc","Type":"ContainerStarted","Data":"8b71b3e3a8ca0d2770a1662b391f8419e19f9d6ee2e20d2e184540585dcaf3dd"} Dec 05 08:26:19 crc kubenswrapper[4780]: I1205 08:26:19.356502 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 05 08:26:19 crc kubenswrapper[4780]: I1205 08:26:19.362216 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 05 08:26:19 crc kubenswrapper[4780]: I1205 08:26:19.385444 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.396815885 podStartE2EDuration="22.385425329s" podCreationTimestamp="2025-12-05 08:25:57 +0000 UTC" firstStartedPulling="2025-12-05 08:25:58.476007036 +0000 UTC m=+5992.545523368" lastFinishedPulling="2025-12-05 08:26:13.46461648 +0000 UTC m=+6007.534132812" observedRunningTime="2025-12-05 08:26:19.384237767 +0000 UTC m=+6013.453754099" watchObservedRunningTime="2025-12-05 08:26:19.385425329 +0000 UTC m=+6013.454941661" Dec 05 08:26:20 crc kubenswrapper[4780]: I1205 08:26:20.366198 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerStarted","Data":"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955"} Dec 05 08:26:24 crc kubenswrapper[4780]: I1205 08:26:24.421238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerStarted","Data":"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843"} Dec 05 08:26:26 crc kubenswrapper[4780]: I1205 08:26:26.147278 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:26:26 crc kubenswrapper[4780]: E1205 
08:26:26.147854 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:27 crc kubenswrapper[4780]: I1205 08:26:27.449410 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerStarted","Data":"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041"} Dec 05 08:26:27 crc kubenswrapper[4780]: I1205 08:26:27.479470 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.470569927 podStartE2EDuration="31.479443928s" podCreationTimestamp="2025-12-05 08:25:56 +0000 UTC" firstStartedPulling="2025-12-05 08:25:58.984281411 +0000 UTC m=+5993.053797743" lastFinishedPulling="2025-12-05 08:26:26.993155412 +0000 UTC m=+6021.062671744" observedRunningTime="2025-12-05 08:26:27.474432222 +0000 UTC m=+6021.543948554" watchObservedRunningTime="2025-12-05 08:26:27.479443928 +0000 UTC m=+6021.548960270" Dec 05 08:26:28 crc kubenswrapper[4780]: I1205 08:26:28.357690 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:28 crc kubenswrapper[4780]: I1205 08:26:28.358021 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:28 crc kubenswrapper[4780]: I1205 08:26:28.360760 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:28 crc kubenswrapper[4780]: I1205 08:26:28.460131 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.729411 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.729652 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" containerName="openstackclient" containerID="cri-o://c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9" gracePeriod=2 Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.741510 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.776125 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 08:26:29 crc kubenswrapper[4780]: E1205 08:26:29.776547 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" containerName="openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.776564 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" containerName="openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.776789 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" containerName="openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 
08:26:29.777700 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.787827 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.803655 4780 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" podUID="f25089e3-336d-4e60-a932-2f027ad4d516" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.860133 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.860217 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.860333 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjl2\" (UniqueName: \"kubernetes.io/projected/f25089e3-336d-4e60-a932-2f027ad4d516-kube-api-access-kzjl2\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.860433 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config-secret\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.962425 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.962925 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.963018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzjl2\" (UniqueName: \"kubernetes.io/projected/f25089e3-336d-4e60-a932-2f027ad4d516-kube-api-access-kzjl2\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.963096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.964016 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.970935 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.975579 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f25089e3-336d-4e60-a932-2f027ad4d516-openstack-config-secret\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:29 crc kubenswrapper[4780]: I1205 08:26:29.985284 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzjl2\" (UniqueName: \"kubernetes.io/projected/f25089e3-336d-4e60-a932-2f027ad4d516-kube-api-access-kzjl2\") pod \"openstackclient\" (UID: \"f25089e3-336d-4e60-a932-2f027ad4d516\") " pod="openstack/openstackclient" Dec 05 08:26:30 crc kubenswrapper[4780]: I1205 08:26:30.114339 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 08:26:30 crc kubenswrapper[4780]: I1205 08:26:30.794996 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.098656 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.509842 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f25089e3-336d-4e60-a932-2f027ad4d516","Type":"ContainerStarted","Data":"5599a2973ad7ce455ca080ae325e4584e156028fcb6cc775a0db5ba585ef87e1"} Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.509905 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f25089e3-336d-4e60-a932-2f027ad4d516","Type":"ContainerStarted","Data":"4013d3848bc3f1489ccce16c3eb7c18b9de39e04c53a04e8874a7ca4342d2a4c"} Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.510162 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="prometheus" containerID="cri-o://3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" gracePeriod=600 Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.510377 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="thanos-sidecar" containerID="cri-o://7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" gracePeriod=600 Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.510402 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="config-reloader" containerID="cri-o://f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" gracePeriod=600 Dec 05 08:26:31 crc kubenswrapper[4780]: I1205 08:26:31.542907 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.542873168 podStartE2EDuration="2.542873168s" podCreationTimestamp="2025-12-05 08:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:26:31.53299462 +0000 UTC m=+6025.602510952" watchObservedRunningTime="2025-12-05 08:26:31.542873168 +0000 UTC m=+6025.612389490" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.133086 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.214004 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle\") pod \"6f0b5da2-8b8e-4293-97a7-3109575ece16\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.214098 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config\") pod \"6f0b5da2-8b8e-4293-97a7-3109575ece16\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.214134 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvvql\" (UniqueName: \"kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql\") pod \"6f0b5da2-8b8e-4293-97a7-3109575ece16\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.214329 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret\") pod \"6f0b5da2-8b8e-4293-97a7-3109575ece16\" (UID: \"6f0b5da2-8b8e-4293-97a7-3109575ece16\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.250669 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0b5da2-8b8e-4293-97a7-3109575ece16" (UID: "6f0b5da2-8b8e-4293-97a7-3109575ece16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.262684 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql" (OuterVolumeSpecName: "kube-api-access-wvvql") pod "6f0b5da2-8b8e-4293-97a7-3109575ece16" (UID: "6f0b5da2-8b8e-4293-97a7-3109575ece16"). InnerVolumeSpecName "kube-api-access-wvvql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.265861 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6f0b5da2-8b8e-4293-97a7-3109575ece16" (UID: "6f0b5da2-8b8e-4293-97a7-3109575ece16"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.283273 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6f0b5da2-8b8e-4293-97a7-3109575ece16" (UID: "6f0b5da2-8b8e-4293-97a7-3109575ece16"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.317543 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.317579 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0b5da2-8b8e-4293-97a7-3109575ece16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.317591 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f0b5da2-8b8e-4293-97a7-3109575ece16-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.317602 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvvql\" (UniqueName: \"kubernetes.io/projected/6f0b5da2-8b8e-4293-97a7-3109575ece16-kube-api-access-wvvql\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.509786 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.526507 4780 generic.go:334] "Generic (PLEG): container finished" podID="6f0b5da2-8b8e-4293-97a7-3109575ece16" containerID="c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9" exitCode=137 Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.526596 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.526604 4780 scope.go:117] "RemoveContainer" containerID="c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530028 4780 generic.go:334] "Generic (PLEG): container finished" podID="8cf0500e-1110-41fe-bc99-285a68379741" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" exitCode=0 Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530076 4780 generic.go:334] "Generic (PLEG): container finished" podID="8cf0500e-1110-41fe-bc99-285a68379741" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" exitCode=0 Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530089 4780 generic.go:334] "Generic (PLEG): container finished" podID="8cf0500e-1110-41fe-bc99-285a68379741" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" exitCode=0 Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530160 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530220 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerDied","Data":"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041"} Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530282 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerDied","Data":"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843"} Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530303 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerDied","Data":"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955"} Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.530318 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8cf0500e-1110-41fe-bc99-285a68379741","Type":"ContainerDied","Data":"33fbd268acf97610925cb122874c81bf6d069651b5e9d795624b0dc6025b3bf8"} Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.572656 4780 scope.go:117] "RemoveContainer" containerID="c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.573501 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9\": container with ID starting with c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9 not found: ID does not exist" containerID="c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.573545 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9"} err="failed to get container status \"c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9\": rpc error: code = NotFound desc = could not find container \"c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9\": 
container with ID starting with c83e90b00cd7f8d97163d82a23715e67a7532a1511602e75085b001a833092a9 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.573577 4780 scope.go:117] "RemoveContainer" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.624199 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.624441 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.624769 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.624831 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.625067 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.625137 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.625347 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.625411 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk599\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599\") pod \"8cf0500e-1110-41fe-bc99-285a68379741\" (UID: \"8cf0500e-1110-41fe-bc99-285a68379741\") " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.629360 4780 scope.go:117] "RemoveContainer" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.631464 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out" (OuterVolumeSpecName: "config-out") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.633191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.633323 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.634144 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config" (OuterVolumeSpecName: "config") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.636044 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.640243 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599" (OuterVolumeSpecName: "kube-api-access-lk599") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "kube-api-access-lk599". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.651717 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.683087 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config" (OuterVolumeSpecName: "web-config") pod "8cf0500e-1110-41fe-bc99-285a68379741" (UID: "8cf0500e-1110-41fe-bc99-285a68379741"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.683393 4780 scope.go:117] "RemoveContainer" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.724732 4780 scope.go:117] "RemoveContainer" containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728674 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728721 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk599\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-kube-api-access-lk599\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728742 4780 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728761 4780 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8cf0500e-1110-41fe-bc99-285a68379741-config-out\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728826 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") on node \"crc\" " Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728849 4780 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8cf0500e-1110-41fe-bc99-285a68379741-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.728863 4780 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8cf0500e-1110-41fe-bc99-285a68379741-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.729003 4780 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8cf0500e-1110-41fe-bc99-285a68379741-web-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.757947 4780 scope.go:117] "RemoveContainer" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.758422 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": container with ID starting with 7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041 not found: ID does not exist" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.758466 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041"} err="failed to get 
container status \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": rpc error: code = NotFound desc = could not find container \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": container with ID starting with 7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.758492 4780 scope.go:117] "RemoveContainer" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.758771 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": container with ID starting with f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843 not found: ID does not exist" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.758827 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843"} err="failed to get container status \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": rpc error: code = NotFound desc = could not find container \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": container with ID starting with f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.758864 4780 scope.go:117] "RemoveContainer" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.759198 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": container with ID starting with 3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955 not found: ID does not exist" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.759238 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955"} err="failed to get container status \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": rpc error: code = NotFound desc = could not find container \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": container with ID starting with 3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.759266 4780 scope.go:117] "RemoveContainer" containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.759596 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": container with ID starting with fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59 not found: ID does not exist" containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.759618 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59"} err="failed to get container status \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": rpc error: code = NotFound desc = could not find container \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": container with ID starting with fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.759636 4780 scope.go:117] "RemoveContainer" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.760229 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041"} err="failed to get container status \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": rpc error: code = NotFound desc = could not find container \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": container with ID starting with 7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.760256 4780 scope.go:117] "RemoveContainer" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.760712 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843"} err="failed to get container status \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": rpc error: code = NotFound desc = could not find container \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": container with ID starting with f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.760735 4780 scope.go:117] "RemoveContainer" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.761120 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955"} err="failed to get container status \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": rpc error: code = NotFound desc = could not find container \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": container with ID starting with 3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.761193 4780 scope.go:117] "RemoveContainer" containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.761571 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59"} err="failed to get container status \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": rpc error: code = NotFound desc = could not find container \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": container with ID starting with 
fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.761597 4780 scope.go:117] "RemoveContainer" containerID="7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762045 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041"} err="failed to get container status \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": rpc error: code = NotFound desc = could not find container \"7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041\": container with ID starting with 7047874b13af3521a4cb1a57680fafd16ea2223eabe42c96f558f61c4ebad041 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762069 4780 scope.go:117] "RemoveContainer" containerID="f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762300 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843"} err="failed to get container status \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": rpc error: code = NotFound desc = could not find container \"f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843\": container with ID starting with f4c94aca0484b5525a7fa3de0de97378e1dcc68b4c60164c6f3798ae31abc843 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762322 4780 scope.go:117] "RemoveContainer" containerID="3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762522 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955"} err="failed to get container status \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": rpc error: code = NotFound desc = could not find container \"3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955\": container with ID starting with 3543f33c5637579e54e642ee622fce4b1ea78abe7c54e24526155bb0a5620955 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762546 4780 scope.go:117] "RemoveContainer" containerID="fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.762709 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59"} err="failed to get container status \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": rpc error: code = NotFound desc = could not find container \"fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59\": container with ID starting with fa67091ec9ffd8bc48512b6e4a8803cefdd48322b59a91a8c68f0ceb8a8f4c59 not found: ID does not exist" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.766746 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.766936 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2") on node "crc" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.832046 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") on node \"crc\" DevicePath \"\"" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.865777 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.879094 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.898771 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.899416 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="thanos-sidecar" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899444 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="thanos-sidecar" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.899482 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="config-reloader" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899498 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="config-reloader" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.899516 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="prometheus" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899524 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="prometheus" Dec 05 08:26:32 crc kubenswrapper[4780]: E1205 08:26:32.899555 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="init-config-reloader" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899565 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="init-config-reloader" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899822 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="prometheus" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899864 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="config-reloader" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.899898 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cf0500e-1110-41fe-bc99-285a68379741" containerName="thanos-sidecar" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.903005 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.905163 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.906570 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.907011 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.909185 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zdgzr" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.910361 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.910502 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.916871 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 05 08:26:32 crc kubenswrapper[4780]: I1205 08:26:32.917655 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035392 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035429 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035458 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035496 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j48l\" (UniqueName: \"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-kube-api-access-8j48l\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035525 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") 
" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.035978 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.036172 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.036224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.036253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.036341 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137642 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137681 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " 
pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137714 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137758 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137806 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137830 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137865 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j48l\" (UniqueName: \"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-kube-api-access-8j48l\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.137948 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.138011 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.138043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.138096 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.140293 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.142522 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.143585 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.143625 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.144581 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.144784 4780 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.144823 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4bd8751af295539aab84a588ec3ce7ce55cafd5cbe44348a9b9f2b73158eb5b0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.147497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.147664 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.151626 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.155802 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.158956 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j48l\" (UniqueName: \"kubernetes.io/projected/ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5-kube-api-access-8j48l\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.215824 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c21cac0-7c92-456e-8383-4ac03711b9a2\") pod \"prometheus-metric-storage-0\" (UID: \"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5\") " pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.240698 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:33 crc kubenswrapper[4780]: I1205 08:26:33.741811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 05 08:26:33 crc kubenswrapper[4780]: W1205 08:26:33.743473 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2c18fc_c708_43b7_bdbc_cc1cb92fb1d5.slice/crio-f7915cbadd0c9c5004f0922d9cce56880038b614317c97cfc2bc37247a227758 WatchSource:0}: Error finding container f7915cbadd0c9c5004f0922d9cce56880038b614317c97cfc2bc37247a227758: Status 404 returned error can't find the container with id f7915cbadd0c9c5004f0922d9cce56880038b614317c97cfc2bc37247a227758 Dec 05 08:26:34 crc kubenswrapper[4780]: I1205 08:26:34.148033 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0b5da2-8b8e-4293-97a7-3109575ece16" path="/var/lib/kubelet/pods/6f0b5da2-8b8e-4293-97a7-3109575ece16/volumes" Dec 05 08:26:34 crc kubenswrapper[4780]: I1205 08:26:34.148666 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cf0500e-1110-41fe-bc99-285a68379741" path="/var/lib/kubelet/pods/8cf0500e-1110-41fe-bc99-285a68379741/volumes" Dec 05 08:26:34 crc kubenswrapper[4780]: I1205 08:26:34.552558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerStarted","Data":"f7915cbadd0c9c5004f0922d9cce56880038b614317c97cfc2bc37247a227758"} Dec 05 08:26:37 crc kubenswrapper[4780]: I1205 08:26:37.581646 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerStarted","Data":"732e0b42199d096835cb203444346ead028c8c414ea24c3d8c4a7406e5ca58d2"} Dec 05 08:26:39 crc kubenswrapper[4780]: I1205 08:26:39.050965 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9cf9-account-create-update-dgcgh"] Dec 05 08:26:39 crc kubenswrapper[4780]: I1205 08:26:39.061774 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bkzrr"] Dec 05 08:26:39 crc kubenswrapper[4780]: I1205 08:26:39.071006 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9cf9-account-create-update-dgcgh"] Dec 05 08:26:39 crc kubenswrapper[4780]: I1205 08:26:39.091291 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bkzrr"] Dec 05 08:26:40 crc kubenswrapper[4780]: I1205 08:26:40.160963 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831d476c-5e00-427c-8221-c65eb889ca3c" path="/var/lib/kubelet/pods/831d476c-5e00-427c-8221-c65eb889ca3c/volumes" Dec 05 08:26:40 crc kubenswrapper[4780]: I1205 08:26:40.162176 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b0f76e-f680-4432-acaf-3bb47c0dea49" path="/var/lib/kubelet/pods/e3b0f76e-f680-4432-acaf-3bb47c0dea49/volumes" Dec 05 08:26:41 crc kubenswrapper[4780]: I1205 08:26:41.140098 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:26:41 crc kubenswrapper[4780]: E1205 08:26:41.140714 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:43 crc kubenswrapper[4780]: I1205 08:26:43.640827 4780 generic.go:334] "Generic (PLEG): container finished" podID="ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5" containerID="732e0b42199d096835cb203444346ead028c8c414ea24c3d8c4a7406e5ca58d2" exitCode=0 Dec 05 08:26:43 crc kubenswrapper[4780]: I1205 08:26:43.640989 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerDied","Data":"732e0b42199d096835cb203444346ead028c8c414ea24c3d8c4a7406e5ca58d2"} Dec 05 08:26:44 crc kubenswrapper[4780]: I1205 08:26:44.653649 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerStarted","Data":"676661d430434b236bee67f0db5042b50e4bf099553b77847eef8de07cc47ed4"} Dec 05 08:26:47 crc kubenswrapper[4780]: I1205 08:26:47.688210 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerStarted","Data":"91123bce8db0f57a89a53165a1049492fa3d34ecd024cdd8a4bf16c1deaa441f"} Dec 05 08:26:47 crc kubenswrapper[4780]: I1205 08:26:47.688854 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5","Type":"ContainerStarted","Data":"8d0482c2d79e40102a65371013314e9cc19b5f3fe18ed92fb76a1b91ebba543e"} Dec 05 08:26:47 crc kubenswrapper[4780]: I1205 08:26:47.729038 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.729015125 podStartE2EDuration="15.729015125s" podCreationTimestamp="2025-12-05 08:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:26:47.716210337 +0000 UTC m=+6041.785726679" watchObservedRunningTime="2025-12-05 08:26:47.729015125 +0000 UTC m=+6041.798531457" Dec 05 08:26:48 crc kubenswrapper[4780]: I1205 08:26:48.240992 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:48 crc kubenswrapper[4780]: I1205 08:26:48.241388 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:48 crc kubenswrapper[4780]: I1205 08:26:48.248200 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:48 crc kubenswrapper[4780]: I1205 08:26:48.703905 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.812363 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.816342 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.819121 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.819179 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.827108 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.930952 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931006 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931069 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csw9t\" (UniqueName: \"kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931107 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:51 crc kubenswrapper[4780]: I1205 08:26:51.931242 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.032932 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 
08:26:52.033189 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.033370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.033486 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csw9t\" (UniqueName: \"kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.033619 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.033711 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.033831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.034002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.034118 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.040227 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.041331 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.049114 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.054117 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csw9t\" (UniqueName: \"kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.054768 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.142799 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.677506 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:26:52 crc kubenswrapper[4780]: I1205 08:26:52.734408 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerStarted","Data":"7559d2b1fe767c7ca873ed8322b0ac213cddc14910a55a3595e979f9ab947de9"} Dec 05 08:26:54 crc kubenswrapper[4780]: I1205 08:26:54.138776 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:26:54 crc kubenswrapper[4780]: E1205 08:26:54.139406 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:26:56 crc kubenswrapper[4780]: I1205 08:26:56.769054 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerStarted","Data":"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3"} Dec 05 08:26:59 crc kubenswrapper[4780]: I1205 08:26:59.798072 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerStarted","Data":"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39"} Dec 05 08:26:59 crc kubenswrapper[4780]: I1205 08:26:59.798559 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerStarted","Data":"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae"} Dec 05 08:27:01 crc kubenswrapper[4780]: I1205 08:27:01.818316 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerStarted","Data":"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5"} Dec 05 08:27:01 crc kubenswrapper[4780]: I1205 08:27:01.818988 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 05 08:27:01 crc kubenswrapper[4780]: I1205 08:27:01.837758 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.627770784 podStartE2EDuration="10.837737766s" podCreationTimestamp="2025-12-05 08:26:51 +0000 UTC" firstStartedPulling="2025-12-05 08:26:52.668633737 +0000 UTC m=+6046.738150069" lastFinishedPulling="2025-12-05 08:27:00.878600719 +0000 UTC m=+6054.948117051" observedRunningTime="2025-12-05 08:27:01.836539884 +0000 UTC m=+6055.906056216" watchObservedRunningTime="2025-12-05 08:27:01.837737766 +0000 UTC m=+6055.907254098" Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.855530 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.860553 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.866314 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.915146 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9gf\" (UniqueName: \"kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.915251 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:04 crc kubenswrapper[4780]: I1205 08:27:04.915752 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.018260 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.018392 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.018455 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9gf\" (UniqueName: \"kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf\") pod \"community-operators-smbhg\" (UID: 
\"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.019362 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.019588 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.036282 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9gf\" (UniqueName: \"kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf\") pod \"community-operators-smbhg\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.199407 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.608696 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.872319 4780 generic.go:334] "Generic (PLEG): container finished" podID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerID="0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e" exitCode=0 Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.872370 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerDied","Data":"0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e"} Dec 05 08:27:05 crc kubenswrapper[4780]: I1205 08:27:05.872628 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerStarted","Data":"9011fcaf6c014b3596fbba7e2b0c8c4c8527496931c0feb6b1d4015c09d8b776"} Dec 05 08:27:06 crc kubenswrapper[4780]: I1205 08:27:06.055348 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-j9ks4"] Dec 05 08:27:06 crc kubenswrapper[4780]: I1205 08:27:06.068593 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-j9ks4"] Dec 05 08:27:06 crc kubenswrapper[4780]: I1205 08:27:06.153941 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d153a814-ed3a-44f1-baef-9c922f0cb899" path="/var/lib/kubelet/pods/d153a814-ed3a-44f1-baef-9c922f0cb899/volumes" Dec 05 08:27:06 crc kubenswrapper[4780]: I1205 08:27:06.884686 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerStarted","Data":"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce"} Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.139908 4780 scope.go:117] "RemoveContainer" 
containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:27:08 crc kubenswrapper[4780]: E1205 08:27:08.140393 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.297153 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-7vfx9"] Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.298915 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.309736 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-7vfx9"] Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.399061 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-fd3b-account-create-update-2m8qv"] Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.400485 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.400787 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.400782 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwhdm\" (UniqueName: \"kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.403535 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.413756 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fd3b-account-create-update-2m8qv"] Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.502666 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwhdm\" (UniqueName: \"kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.502773 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxn9\" (UniqueName: \"kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.502806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.502924 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.503710 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.521497 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwhdm\" (UniqueName: \"kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm\") pod \"aodh-db-create-7vfx9\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.604679 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxn9\" (UniqueName: \"kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.604724 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.605813 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.626801 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxn9\" (UniqueName: \"kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9\") pod \"aodh-fd3b-account-create-update-2m8qv\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.655062 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.732459 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.914369 4780 generic.go:334] "Generic (PLEG): container finished" podID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerID="bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce" exitCode=0 Dec 05 08:27:08 crc kubenswrapper[4780]: I1205 08:27:08.914466 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerDied","Data":"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce"} Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.269278 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-7vfx9"] Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.436184 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.439113 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.502537 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.531619 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqpw\" (UniqueName: \"kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.531727 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.535158 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.638223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqpw\" (UniqueName: \"kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.638274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.638367 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.638941 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.639541 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.672695 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqpw\" (UniqueName: \"kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw\") pod \"redhat-marketplace-rxvmv\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:09 crc kubenswrapper[4780]: I1205 08:27:09.806775 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.092673 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerStarted","Data":"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75"} Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.094053 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-fd3b-account-create-update-2m8qv"] Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.095045 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7vfx9" event={"ID":"2e480000-a70a-4680-a18d-b7b79381be78","Type":"ContainerStarted","Data":"acf2003408884b3383ccc2d8f0107dc330bbbec1230dc7bdd4e16f673a3931d4"} Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.095076 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7vfx9" event={"ID":"2e480000-a70a-4680-a18d-b7b79381be78","Type":"ContainerStarted","Data":"2420eed996afe40dad5d6ffdc26289db6b1daf9cf908ba5c95e5501a80403e05"} Dec 05 08:27:10 crc kubenswrapper[4780]: W1205 08:27:10.106067 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49dc842_7c79_4e32_9caa_0ba8079ba6d5.slice/crio-bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df WatchSource:0}: Error finding container bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df: Status 404 returned error can't find the container with id bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.119399 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smbhg" podStartSLOduration=2.63912826 podStartE2EDuration="6.119380268s" podCreationTimestamp="2025-12-05 08:27:04 +0000 UTC" 
firstStartedPulling="2025-12-05 08:27:05.874681747 +0000 UTC m=+6059.944198079" lastFinishedPulling="2025-12-05 08:27:09.354933755 +0000 UTC m=+6063.424450087" observedRunningTime="2025-12-05 08:27:10.112847281 +0000 UTC m=+6064.182363613" watchObservedRunningTime="2025-12-05 08:27:10.119380268 +0000 UTC m=+6064.188896600" Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.151143 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-7vfx9" podStartSLOduration=2.151122981 podStartE2EDuration="2.151122981s" podCreationTimestamp="2025-12-05 08:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:27:10.132390432 +0000 UTC m=+6064.201906764" watchObservedRunningTime="2025-12-05 08:27:10.151122981 +0000 UTC m=+6064.220639303" Dec 05 08:27:10 crc kubenswrapper[4780]: I1205 08:27:10.574008 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:10 crc kubenswrapper[4780]: W1205 08:27:10.579033 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod685eea83_5a12_4fbd_97f3_f5cb858c96ab.slice/crio-3e2548536340f880c27b7e9dd2836eb3064f697a9c4c6039954d0bd496015025 WatchSource:0}: Error finding container 3e2548536340f880c27b7e9dd2836eb3064f697a9c4c6039954d0bd496015025: Status 404 returned error can't find the container with id 3e2548536340f880c27b7e9dd2836eb3064f697a9c4c6039954d0bd496015025 Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.105378 4780 generic.go:334] "Generic (PLEG): container finished" podID="a49dc842-7c79-4e32-9caa-0ba8079ba6d5" containerID="3dd30f043b29082812fe9001096f8f53694448f2be09c9042edf8f78b00bd2ed" exitCode=0 Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.105829 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fd3b-account-create-update-2m8qv" event={"ID":"a49dc842-7c79-4e32-9caa-0ba8079ba6d5","Type":"ContainerDied","Data":"3dd30f043b29082812fe9001096f8f53694448f2be09c9042edf8f78b00bd2ed"} Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.105936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fd3b-account-create-update-2m8qv" event={"ID":"a49dc842-7c79-4e32-9caa-0ba8079ba6d5","Type":"ContainerStarted","Data":"bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df"} Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.107258 4780 generic.go:334] "Generic (PLEG): container finished" podID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerID="6b30066452833cffe686f62cce2d9381497d838035abf6d3d9ca92cf06c85f6c" exitCode=0 Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.107336 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerDied","Data":"6b30066452833cffe686f62cce2d9381497d838035abf6d3d9ca92cf06c85f6c"} Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.107369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerStarted","Data":"3e2548536340f880c27b7e9dd2836eb3064f697a9c4c6039954d0bd496015025"} Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.109504 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="2e480000-a70a-4680-a18d-b7b79381be78" containerID="acf2003408884b3383ccc2d8f0107dc330bbbec1230dc7bdd4e16f673a3931d4" exitCode=0 Dec 05 08:27:11 crc kubenswrapper[4780]: I1205 08:27:11.109555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7vfx9" event={"ID":"2e480000-a70a-4680-a18d-b7b79381be78","Type":"ContainerDied","Data":"acf2003408884b3383ccc2d8f0107dc330bbbec1230dc7bdd4e16f673a3931d4"} Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.121620 4780 generic.go:334] "Generic (PLEG): container finished" podID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerID="e694beaaa94f10261b7e7619f5cb02f7a5c1ea0bd0ad78952b5ad1187b9ac4ca" exitCode=0 Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.121795 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerDied","Data":"e694beaaa94f10261b7e7619f5cb02f7a5c1ea0bd0ad78952b5ad1187b9ac4ca"} Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.507119 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.674796 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts\") pod \"2e480000-a70a-4680-a18d-b7b79381be78\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.675284 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwhdm\" (UniqueName: \"kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm\") pod \"2e480000-a70a-4680-a18d-b7b79381be78\" (UID: \"2e480000-a70a-4680-a18d-b7b79381be78\") " Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.676055 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e480000-a70a-4680-a18d-b7b79381be78" (UID: "2e480000-a70a-4680-a18d-b7b79381be78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.701964 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm" (OuterVolumeSpecName: "kube-api-access-fwhdm") pod "2e480000-a70a-4680-a18d-b7b79381be78" (UID: "2e480000-a70a-4680-a18d-b7b79381be78"). InnerVolumeSpecName "kube-api-access-fwhdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.780871 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwhdm\" (UniqueName: \"kubernetes.io/projected/2e480000-a70a-4680-a18d-b7b79381be78-kube-api-access-fwhdm\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.781145 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e480000-a70a-4680-a18d-b7b79381be78-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.787534 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.882618 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzxn9\" (UniqueName: \"kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9\") pod \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.882780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts\") pod \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\" (UID: \"a49dc842-7c79-4e32-9caa-0ba8079ba6d5\") " Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.883261 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a49dc842-7c79-4e32-9caa-0ba8079ba6d5" (UID: "a49dc842-7c79-4e32-9caa-0ba8079ba6d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.883636 4780 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.888106 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9" (OuterVolumeSpecName: "kube-api-access-mzxn9") pod "a49dc842-7c79-4e32-9caa-0ba8079ba6d5" (UID: "a49dc842-7c79-4e32-9caa-0ba8079ba6d5"). InnerVolumeSpecName "kube-api-access-mzxn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:12 crc kubenswrapper[4780]: I1205 08:27:12.985331 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzxn9\" (UniqueName: \"kubernetes.io/projected/a49dc842-7c79-4e32-9caa-0ba8079ba6d5-kube-api-access-mzxn9\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.131951 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-7vfx9" Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.132163 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-7vfx9" event={"ID":"2e480000-a70a-4680-a18d-b7b79381be78","Type":"ContainerDied","Data":"2420eed996afe40dad5d6ffdc26289db6b1daf9cf908ba5c95e5501a80403e05"} Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.132666 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2420eed996afe40dad5d6ffdc26289db6b1daf9cf908ba5c95e5501a80403e05" Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.133594 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-fd3b-account-create-update-2m8qv" event={"ID":"a49dc842-7c79-4e32-9caa-0ba8079ba6d5","Type":"ContainerDied","Data":"bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df"} Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.133617 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb591b69df385803ef8bc7a133b2c6e84178ae425b54e7ec576a1de40137b4df" Dec 05 08:27:13 crc kubenswrapper[4780]: I1205 08:27:13.133659 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-fd3b-account-create-update-2m8qv" Dec 05 08:27:14 crc kubenswrapper[4780]: I1205 08:27:14.151168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerStarted","Data":"522bca8a14eee1575b4d646dc1cf2278f694130874eecd0b100a6689633fb50d"} Dec 05 08:27:14 crc kubenswrapper[4780]: I1205 08:27:14.181711 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rxvmv" podStartSLOduration=3.389436562 podStartE2EDuration="5.181694679s" podCreationTimestamp="2025-12-05 08:27:09 +0000 UTC" firstStartedPulling="2025-12-05 08:27:11.109310203 +0000 UTC m=+6065.178826535" lastFinishedPulling="2025-12-05 08:27:12.90156833 +0000 UTC m=+6066.971084652" observedRunningTime="2025-12-05 08:27:14.166084293 +0000 UTC m=+6068.235600645" watchObservedRunningTime="2025-12-05 08:27:14.181694679 +0000 UTC m=+6068.251211011" Dec 05 08:27:15 crc kubenswrapper[4780]: I1205 08:27:15.200111 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:15 crc kubenswrapper[4780]: I1205 08:27:15.200443 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:15 crc kubenswrapper[4780]: I1205 08:27:15.253070 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:16 crc kubenswrapper[4780]: I1205 08:27:16.218286 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:17 crc kubenswrapper[4780]: I1205 08:27:17.414557 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.187928 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smbhg" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="registry-server" 
containerID="cri-o://fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75" gracePeriod=2 Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.713776 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.792599 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-5ck2d"] Dec 05 08:27:18 crc kubenswrapper[4780]: E1205 08:27:18.793207 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="registry-server" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793234 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="registry-server" Dec 05 08:27:18 crc kubenswrapper[4780]: E1205 08:27:18.793263 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="extract-content" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793274 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="extract-content" Dec 05 08:27:18 crc kubenswrapper[4780]: E1205 08:27:18.793297 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e480000-a70a-4680-a18d-b7b79381be78" containerName="mariadb-database-create" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793306 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e480000-a70a-4680-a18d-b7b79381be78" containerName="mariadb-database-create" Dec 05 08:27:18 crc kubenswrapper[4780]: E1205 08:27:18.793324 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49dc842-7c79-4e32-9caa-0ba8079ba6d5" containerName="mariadb-account-create-update" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793334 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49dc842-7c79-4e32-9caa-0ba8079ba6d5" containerName="mariadb-account-create-update" Dec 05 08:27:18 crc kubenswrapper[4780]: E1205 08:27:18.793352 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="extract-utilities" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793358 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="extract-utilities" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793541 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e480000-a70a-4680-a18d-b7b79381be78" containerName="mariadb-database-create" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793564 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerName="registry-server" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.793578 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49dc842-7c79-4e32-9caa-0ba8079ba6d5" containerName="mariadb-account-create-update" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.794676 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.797071 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gc8bn" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.797340 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.797632 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.797663 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.803259 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5ck2d"] Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.902688 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content\") pod \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.902800 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9gf\" (UniqueName: \"kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf\") pod \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.902852 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities\") pod \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\" (UID: \"03d45ea5-fba2-4f54-8f10-370bb3e888d2\") " Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.903502 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.903623 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.903674 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.903792 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbgzb\" (UniqueName: \"kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.904086 4780 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities" (OuterVolumeSpecName: "utilities") pod "03d45ea5-fba2-4f54-8f10-370bb3e888d2" (UID: "03d45ea5-fba2-4f54-8f10-370bb3e888d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.917012 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf" (OuterVolumeSpecName: "kube-api-access-5c9gf") pod "03d45ea5-fba2-4f54-8f10-370bb3e888d2" (UID: "03d45ea5-fba2-4f54-8f10-370bb3e888d2"). InnerVolumeSpecName "kube-api-access-5c9gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:18 crc kubenswrapper[4780]: I1205 08:27:18.955609 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03d45ea5-fba2-4f54-8f10-370bb3e888d2" (UID: "03d45ea5-fba2-4f54-8f10-370bb3e888d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.006172 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.006923 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.007031 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbgzb\" (UniqueName: \"kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.007176 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.007245 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9gf\" (UniqueName: \"kubernetes.io/projected/03d45ea5-fba2-4f54-8f10-370bb3e888d2-kube-api-access-5c9gf\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.007263 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.007275 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03d45ea5-fba2-4f54-8f10-370bb3e888d2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 
08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.010534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.011265 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.012713 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.033141 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbgzb\" (UniqueName: \"kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb\") pod \"aodh-db-sync-5ck2d\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.122010 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.201166 4780 generic.go:334] "Generic (PLEG): container finished" podID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" containerID="fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75" exitCode=0 Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.201232 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smbhg" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.201218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerDied","Data":"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75"} Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.201396 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smbhg" event={"ID":"03d45ea5-fba2-4f54-8f10-370bb3e888d2","Type":"ContainerDied","Data":"9011fcaf6c014b3596fbba7e2b0c8c4c8527496931c0feb6b1d4015c09d8b776"} Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.201426 4780 scope.go:117] "RemoveContainer" containerID="fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.229163 4780 scope.go:117] "RemoveContainer" containerID="bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.280532 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.294561 4780 scope.go:117] "RemoveContainer" containerID="0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.309129 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smbhg"] Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.375919 4780 scope.go:117] "RemoveContainer" containerID="fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75" Dec 05 08:27:19 crc kubenswrapper[4780]: E1205 08:27:19.376463 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75\": container with ID starting with fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75 not found: ID does not exist" containerID="fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.376517 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75"} err="failed to get container status \"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75\": rpc error: code = NotFound desc = could not find container \"fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75\": container with ID starting with fe7b3ba187fe982f8bb600fd7c593e6cc2f4c69dde91b75fd4815b56781ddb75 not found: ID does not exist" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.376550 4780 scope.go:117] "RemoveContainer" containerID="bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce" Dec 05 08:27:19 crc kubenswrapper[4780]: E1205 08:27:19.376832 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce\": container with ID starting with bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce not found: ID does not exist" containerID="bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.376870 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce"} err="failed to get container status \"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce\": rpc error: code = NotFound desc = could not find container \"bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce\": container with ID starting with bf374ad69da083fa251ac2c2412a31840ae44f696dee867bfed95acf6a0248ce not found: ID does not exist" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.376962 4780 scope.go:117] "RemoveContainer" containerID="0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e" Dec 05 08:27:19 crc kubenswrapper[4780]: E1205 08:27:19.377282 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e\": container with ID starting with 0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e not found: ID does not exist" containerID="0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.377313 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e"} err="failed to get container status \"0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e\": rpc error: code = NotFound desc = could not find container \"0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e\": container with ID starting with 0229d6dd9955e8e4dc6ca69a209f0d22ccdbefb75b6d8c6860c7e7970de1019e not found: ID does not exist" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.614900 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5ck2d"] Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.619344 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.808973 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.809020 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:19 crc kubenswrapper[4780]: I1205 08:27:19.868653 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:20 crc kubenswrapper[4780]: I1205 08:27:20.139032 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:27:20 crc kubenswrapper[4780]: E1205 08:27:20.139339 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:27:20 crc kubenswrapper[4780]: I1205 08:27:20.151232 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d45ea5-fba2-4f54-8f10-370bb3e888d2" 
path="/var/lib/kubelet/pods/03d45ea5-fba2-4f54-8f10-370bb3e888d2/volumes" Dec 05 08:27:20 crc kubenswrapper[4780]: I1205 08:27:20.214180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5ck2d" event={"ID":"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a","Type":"ContainerStarted","Data":"44f35dd309c2f7629a859f21e7ffa1c3a8a72a1227a0c7348ccda805053b8490"} Dec 05 08:27:20 crc kubenswrapper[4780]: I1205 08:27:20.273821 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:21 crc kubenswrapper[4780]: I1205 08:27:21.813425 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:22 crc kubenswrapper[4780]: I1205 08:27:22.155971 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 08:27:22 crc kubenswrapper[4780]: I1205 08:27:22.234994 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rxvmv" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="registry-server" containerID="cri-o://522bca8a14eee1575b4d646dc1cf2278f694130874eecd0b100a6689633fb50d" gracePeriod=2 Dec 05 08:27:23 crc kubenswrapper[4780]: I1205 08:27:23.266985 4780 generic.go:334] "Generic (PLEG): container finished" podID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerID="522bca8a14eee1575b4d646dc1cf2278f694130874eecd0b100a6689633fb50d" exitCode=0 Dec 05 08:27:23 crc kubenswrapper[4780]: I1205 08:27:23.267041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerDied","Data":"522bca8a14eee1575b4d646dc1cf2278f694130874eecd0b100a6689633fb50d"} Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.520343 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.549553 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content\") pod \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.549743 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities\") pod \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.550688 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities" (OuterVolumeSpecName: "utilities") pod "685eea83-5a12-4fbd-97f3-f5cb858c96ab" (UID: "685eea83-5a12-4fbd-97f3-f5cb858c96ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.550798 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqpw\" (UniqueName: \"kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw\") pod \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\" (UID: \"685eea83-5a12-4fbd-97f3-f5cb858c96ab\") " Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.552831 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.555567 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw" (OuterVolumeSpecName: "kube-api-access-cpqpw") pod "685eea83-5a12-4fbd-97f3-f5cb858c96ab" (UID: "685eea83-5a12-4fbd-97f3-f5cb858c96ab"). InnerVolumeSpecName "kube-api-access-cpqpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.567189 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "685eea83-5a12-4fbd-97f3-f5cb858c96ab" (UID: "685eea83-5a12-4fbd-97f3-f5cb858c96ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.655927 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685eea83-5a12-4fbd-97f3-f5cb858c96ab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:24 crc kubenswrapper[4780]: I1205 08:27:24.655958 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqpw\" (UniqueName: \"kubernetes.io/projected/685eea83-5a12-4fbd-97f3-f5cb858c96ab-kube-api-access-cpqpw\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.295490 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5ck2d" event={"ID":"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a","Type":"ContainerStarted","Data":"bff043084d2c36c6263603443da68b5664584a8fa1411de8aa66360852ddf92a"} Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.298583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rxvmv" event={"ID":"685eea83-5a12-4fbd-97f3-f5cb858c96ab","Type":"ContainerDied","Data":"3e2548536340f880c27b7e9dd2836eb3064f697a9c4c6039954d0bd496015025"} Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.298641 4780 scope.go:117] "RemoveContainer" containerID="522bca8a14eee1575b4d646dc1cf2278f694130874eecd0b100a6689633fb50d" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.298963 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rxvmv" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.321551 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-5ck2d" podStartSLOduration=2.730882679 podStartE2EDuration="7.32153031s" podCreationTimestamp="2025-12-05 08:27:18 +0000 UTC" firstStartedPulling="2025-12-05 08:27:19.61909947 +0000 UTC m=+6073.688615792" lastFinishedPulling="2025-12-05 08:27:24.209747091 +0000 UTC m=+6078.279263423" observedRunningTime="2025-12-05 08:27:25.312942156 +0000 UTC m=+6079.382458498" watchObservedRunningTime="2025-12-05 08:27:25.32153031 +0000 UTC m=+6079.391046642" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.331555 4780 scope.go:117] "RemoveContainer" containerID="e694beaaa94f10261b7e7619f5cb02f7a5c1ea0bd0ad78952b5ad1187b9ac4ca" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.344549 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.353003 4780 scope.go:117] "RemoveContainer" containerID="6b30066452833cffe686f62cce2d9381497d838035abf6d3d9ca92cf06c85f6c" Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.355705 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rxvmv"] Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.983852 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:25 crc kubenswrapper[4780]: I1205 08:27:25.984116 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8bd5357-473e-47a9-baeb-38c14f8a7570" containerName="kube-state-metrics" containerID="cri-o://94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f" gracePeriod=30 Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.156253 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" path="/var/lib/kubelet/pods/685eea83-5a12-4fbd-97f3-f5cb858c96ab/volumes" Dec 05 08:27:26 crc kubenswrapper[4780]: E1205 08:27:26.294807 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bd5357_473e_47a9_baeb_38c14f8a7570.slice/crio-conmon-94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f.scope\": RecentStats: unable to find data in memory cache]" Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.342323 4780 generic.go:334] "Generic (PLEG): container finished" podID="a8bd5357-473e-47a9-baeb-38c14f8a7570" containerID="94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f" exitCode=2 Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.342474 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8bd5357-473e-47a9-baeb-38c14f8a7570","Type":"ContainerDied","Data":"94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f"} Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.500964 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.608336 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfqkz\" (UniqueName: \"kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz\") pod \"a8bd5357-473e-47a9-baeb-38c14f8a7570\" (UID: \"a8bd5357-473e-47a9-baeb-38c14f8a7570\") " Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.632480 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz" (OuterVolumeSpecName: "kube-api-access-cfqkz") pod "a8bd5357-473e-47a9-baeb-38c14f8a7570" (UID: "a8bd5357-473e-47a9-baeb-38c14f8a7570"). InnerVolumeSpecName "kube-api-access-cfqkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:26 crc kubenswrapper[4780]: I1205 08:27:26.711861 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfqkz\" (UniqueName: \"kubernetes.io/projected/a8bd5357-473e-47a9-baeb-38c14f8a7570-kube-api-access-cfqkz\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.353464 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8bd5357-473e-47a9-baeb-38c14f8a7570","Type":"ContainerDied","Data":"a268da03f650a5abb3fa6616c220d72b8fc29f3982b7f2c9bf19bc4c32df07b9"} Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.353518 4780 scope.go:117] "RemoveContainer" containerID="94f4e85529bac103a463abb40844cf471cfbb9f555c6b2a7eceba2419966f41f" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.353533 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.389189 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.406518 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.416695 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:27 crc kubenswrapper[4780]: E1205 08:27:27.417245 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="extract-content" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417265 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="extract-content" Dec 05 08:27:27 crc kubenswrapper[4780]: E1205 08:27:27.417287 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bd5357-473e-47a9-baeb-38c14f8a7570" containerName="kube-state-metrics" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417294 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bd5357-473e-47a9-baeb-38c14f8a7570" containerName="kube-state-metrics" Dec 05 08:27:27 crc kubenswrapper[4780]: E1205 08:27:27.417321 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="extract-utilities" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417329 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="extract-utilities" Dec 05 08:27:27 crc kubenswrapper[4780]: E1205 
08:27:27.417342 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="registry-server" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417348 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="registry-server" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417531 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bd5357-473e-47a9-baeb-38c14f8a7570" containerName="kube-state-metrics" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.417552 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="685eea83-5a12-4fbd-97f3-f5cb858c96ab" containerName="registry-server" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.418280 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.420147 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.420225 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.430586 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.529806 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.529989 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.530068 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.530176 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j7m5\" (UniqueName: \"kubernetes.io/projected/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-api-access-2j7m5\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.632261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.633038 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.633728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.634121 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j7m5\" (UniqueName: \"kubernetes.io/projected/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-api-access-2j7m5\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.636699 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.636853 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.638559 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.654253 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j7m5\" (UniqueName: \"kubernetes.io/projected/fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d-kube-api-access-2j7m5\") pod \"kube-state-metrics-0\" (UID: \"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d\") " pod="openstack/kube-state-metrics-0" Dec 05 08:27:27 crc kubenswrapper[4780]: I1205 08:27:27.754783 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.152922 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bd5357-473e-47a9-baeb-38c14f8a7570" path="/var/lib/kubelet/pods/a8bd5357-473e-47a9-baeb-38c14f8a7570/volumes" Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.280437 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.309916 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.310198 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-central-agent" containerID="cri-o://8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3" gracePeriod=30 Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.310307 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="sg-core" containerID="cri-o://80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39" gracePeriod=30 Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.310266 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="proxy-httpd" containerID="cri-o://c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5" gracePeriod=30 Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.310382 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-notification-agent" containerID="cri-o://b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae" gracePeriod=30 Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.364529 4780 generic.go:334] "Generic (PLEG): container finished" podID="0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" containerID="bff043084d2c36c6263603443da68b5664584a8fa1411de8aa66360852ddf92a" exitCode=0 Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.364591 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5ck2d" event={"ID":"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a","Type":"ContainerDied","Data":"bff043084d2c36c6263603443da68b5664584a8fa1411de8aa66360852ddf92a"} Dec 05 08:27:28 crc kubenswrapper[4780]: I1205 08:27:28.366383 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d","Type":"ContainerStarted","Data":"c23b8da798aa3eda2e71661ad423b8dcd44e89903082d128ee47274e78cdefd7"} Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.379989 4780 generic.go:334] "Generic (PLEG): container finished" podID="9385666f-28d4-4008-9543-b319e61b3118" containerID="c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5" exitCode=0 Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.380312 4780 generic.go:334] "Generic (PLEG): container finished" podID="9385666f-28d4-4008-9543-b319e61b3118" containerID="80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39" exitCode=2 Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.380324 4780 generic.go:334] "Generic (PLEG): container finished" 
podID="9385666f-28d4-4008-9543-b319e61b3118" containerID="8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3" exitCode=0 Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.380049 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerDied","Data":"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5"} Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.380386 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerDied","Data":"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39"} Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.380400 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerDied","Data":"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3"} Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.382383 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d","Type":"ContainerStarted","Data":"be7e7ba8b340cbeb49702bcf7d10eb1fabf468ca917258fcfbae24673091c2c0"} Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.404927 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.947945043 podStartE2EDuration="2.404906083s" podCreationTimestamp="2025-12-05 08:27:27 +0000 UTC" firstStartedPulling="2025-12-05 08:27:28.278015982 +0000 UTC m=+6082.347532314" lastFinishedPulling="2025-12-05 08:27:28.734977022 +0000 UTC m=+6082.804493354" observedRunningTime="2025-12-05 08:27:29.39745558 +0000 UTC m=+6083.466971952" watchObservedRunningTime="2025-12-05 08:27:29.404906083 +0000 UTC m=+6083.474422405" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.732517 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.787101 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts\") pod \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.787379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbgzb\" (UniqueName: \"kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb\") pod \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.787413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data\") pod \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.787557 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle\") pod \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\" (UID: \"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a\") " Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.809262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb" (OuterVolumeSpecName: "kube-api-access-dbgzb") pod "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" (UID: "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a"). InnerVolumeSpecName "kube-api-access-dbgzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.813078 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts" (OuterVolumeSpecName: "scripts") pod "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" (UID: "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.826028 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data" (OuterVolumeSpecName: "config-data") pod "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" (UID: "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.835516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" (UID: "0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.890456 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbgzb\" (UniqueName: \"kubernetes.io/projected/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-kube-api-access-dbgzb\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.890769 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.890782 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:29 crc kubenswrapper[4780]: I1205 08:27:29.890791 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.392931 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5ck2d" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.392930 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5ck2d" event={"ID":"0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a","Type":"ContainerDied","Data":"44f35dd309c2f7629a859f21e7ffa1c3a8a72a1227a0c7348ccda805053b8490"} Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.392987 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f35dd309c2f7629a859f21e7ffa1c3a8a72a1227a0c7348ccda805053b8490" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.393061 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.588494 4780 scope.go:117] "RemoveContainer" containerID="8ab0522b7cd25c3bb24eee261ef85f583e3baf024d17c0a7153bd90683b2545f" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.616040 4780 scope.go:117] "RemoveContainer" containerID="fda019b011078bab8834b929dced50472039854968088816e4bf09a075c4a0c4" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.646986 4780 scope.go:117] "RemoveContainer" containerID="1ea666c5e9a65ee5ae5ab44321ef84bb0f5e06b19c041a49b6a44c86f8193dfd" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.691497 4780 scope.go:117] "RemoveContainer" containerID="d930fc3f58e1d4756fd2e44548f493a5f83e01dfac69ed8690360afdf73d00f0" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.735627 4780 scope.go:117] "RemoveContainer" containerID="6e0f71ff785779a3996787755a29f3a9cf3d0c21cb0080744469c298e2e3fb05" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.755922 4780 scope.go:117] "RemoveContainer" containerID="e3f8b439f935b50eab6e3ca0e82f447b3c65a1d0058bddba2d63c1eb62a87e11" Dec 05 08:27:30 crc kubenswrapper[4780]: I1205 08:27:30.776831 4780 scope.go:117] "RemoveContainer" containerID="af61b0ce60dfb6455de26a5dfecf64061d2e5259b52168d4c49063ba6317fbf0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.139542 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.140218 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.274901 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.355440 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.355837 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.355899 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.355965 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.356007 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csw9t\" (UniqueName: \"kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.356381 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.356498 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle\") pod \"9385666f-28d4-4008-9543-b319e61b3118\" (UID: \"9385666f-28d4-4008-9543-b319e61b3118\") " Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.356680 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.356701 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.357129 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.357152 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9385666f-28d4-4008-9543-b319e61b3118-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.368979 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t" (OuterVolumeSpecName: "kube-api-access-csw9t") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "kube-api-access-csw9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.369260 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts" (OuterVolumeSpecName: "scripts") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.410476 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.415282 4780 generic.go:334] "Generic (PLEG): container finished" podID="9385666f-28d4-4008-9543-b319e61b3118" containerID="b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae" exitCode=0 Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.415325 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerDied","Data":"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae"} Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.415348 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9385666f-28d4-4008-9543-b319e61b3118","Type":"ContainerDied","Data":"7559d2b1fe767c7ca873ed8322b0ac213cddc14910a55a3595e979f9ab947de9"} Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.415373 4780 scope.go:117] "RemoveContainer" containerID="c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.415432 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.453100 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.458856 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.458911 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.458922 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.458930 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csw9t\" (UniqueName: \"kubernetes.io/projected/9385666f-28d4-4008-9543-b319e61b3118-kube-api-access-csw9t\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.470412 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data" (OuterVolumeSpecName: "config-data") pod "9385666f-28d4-4008-9543-b319e61b3118" (UID: "9385666f-28d4-4008-9543-b319e61b3118"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.498401 4780 scope.go:117] "RemoveContainer" containerID="80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.519495 4780 scope.go:117] "RemoveContainer" containerID="b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.539747 4780 scope.go:117] "RemoveContainer" containerID="8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.562800 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9385666f-28d4-4008-9543-b319e61b3118-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.565184 4780 scope.go:117] "RemoveContainer" containerID="c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.565737 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5\": container with ID starting with c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5 not found: ID does not exist" containerID="c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.565773 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5"} err="failed to get container status \"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5\": rpc error: code = NotFound desc = could not find container \"c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5\": container with ID starting with c5aba57e482e0f39f17c9c459eb2f347f3b43dbb005d1634a31a8c63164430b5 not found: ID does not exist" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.565796 4780 scope.go:117] "RemoveContainer" containerID="80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.566251 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39\": container with ID starting with 80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39 not found: ID does not exist" containerID="80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.566292 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39"} err="failed to get container status \"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39\": rpc error: code = NotFound desc = could not find container \"80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39\": container with ID starting with 80f107afdeb77fa694ec80d3289f821fb2c652318c63fbd7d9c8ca745fb52d39 not found: ID does not exist" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.566324 4780 scope.go:117] "RemoveContainer" containerID="b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 
08:27:32.566979 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae\": container with ID starting with b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae not found: ID does not exist" containerID="b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.567061 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae"} err="failed to get container status \"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae\": rpc error: code = NotFound desc = could not find container \"b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae\": container with ID starting with b3e5c60cb945850ffd2d1e1d9ac7e550b7ca7c3cf3f17437b74decc81989abae not found: ID does not exist" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.567117 4780 scope.go:117] "RemoveContainer" containerID="8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.567732 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3\": container with ID starting with 8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3 not found: ID does not exist" containerID="8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.567763 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3"} err="failed to get container status \"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3\": rpc error: code = NotFound desc = could not find container \"8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3\": container with ID starting with 8f6f59a5e8c94e4e540995afc8180cd51c8b12ffe376fcd39758ca6bae83baa3 not found: ID does not exist" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.752480 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.763179 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.785547 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.785996 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" containerName="aodh-db-sync" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786014 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" containerName="aodh-db-sync" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.786030 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="sg-core" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786036 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="sg-core" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.786047 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-notification-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786053 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-notification-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.786073 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="proxy-httpd" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786080 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="proxy-httpd" Dec 05 08:27:32 crc kubenswrapper[4780]: E1205 08:27:32.786105 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-central-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786110 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-central-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786290 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="sg-core" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786310 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="proxy-httpd" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786327 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-notification-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786339 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" containerName="aodh-db-sync" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.786351 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9385666f-28d4-4008-9543-b319e61b3118" containerName="ceilometer-central-agent" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.788112 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.790919 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.791085 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.791250 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.796119 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871426 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871569 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871646 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c44p5\" (UniqueName: \"kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871673 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871709 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.871741 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.973773 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.973857 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.973932 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.973958 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.974003 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c44p5\" (UniqueName: \"kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.974025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.974061 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.974097 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.975783 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.975808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.978442 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.978647 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.979070 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.979706 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.980390 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:32 crc kubenswrapper[4780]: I1205 08:27:32.997989 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c44p5\" (UniqueName: \"kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5\") pod \"ceilometer-0\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " pod="openstack/ceilometer-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.107899 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.348799 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.351847 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.357257 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.357332 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gc8bn" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.357553 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.364191 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.486800 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8c5w\" (UniqueName: \"kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.487224 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.487255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.487338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.561376 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.589909 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.590053 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8c5w\" (UniqueName: \"kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.590104 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.590129 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.596037 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.596383 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.596869 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.607054 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8c5w\" (UniqueName: \"kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w\") pod \"aodh-0\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " pod="openstack/aodh-0" Dec 05 08:27:33 crc kubenswrapper[4780]: I1205 08:27:33.695777 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 05 08:27:34 crc kubenswrapper[4780]: I1205 08:27:34.175035 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9385666f-28d4-4008-9543-b319e61b3118" path="/var/lib/kubelet/pods/9385666f-28d4-4008-9543-b319e61b3118/volumes" Dec 05 08:27:34 crc kubenswrapper[4780]: I1205 08:27:34.275144 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 05 08:27:34 crc kubenswrapper[4780]: W1205 08:27:34.281693 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34d0b9a1_9b51_48a0_8e1e_bf7cdf39dbbd.slice/crio-4ef4780b757fb1b5f17eb762277b4759b467c927d7e28fd1384e7d7c3c2fd3df WatchSource:0}: Error finding container 4ef4780b757fb1b5f17eb762277b4759b467c927d7e28fd1384e7d7c3c2fd3df: Status 404 returned error can't find the container with id 4ef4780b757fb1b5f17eb762277b4759b467c927d7e28fd1384e7d7c3c2fd3df Dec 05 08:27:34 crc kubenswrapper[4780]: I1205 08:27:34.443389 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerStarted","Data":"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b"} Dec 05 08:27:34 crc kubenswrapper[4780]: I1205 08:27:34.443835 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerStarted","Data":"d919b64a1a71176df0adfdebc7bb0907c980419f58d1ddf756b91ad262e7c790"} Dec 05 08:27:34 crc kubenswrapper[4780]: I1205 08:27:34.445706 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerStarted","Data":"4ef4780b757fb1b5f17eb762277b4759b467c927d7e28fd1384e7d7c3c2fd3df"} Dec 05 08:27:35 crc kubenswrapper[4780]: I1205 08:27:35.458369 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerStarted","Data":"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89"} Dec 05 08:27:35 crc kubenswrapper[4780]: I1205 08:27:35.458690 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerStarted","Data":"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b"} Dec 05 08:27:35 crc kubenswrapper[4780]: I1205 08:27:35.461139 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerStarted","Data":"b30625132c1805b8c864cba2d9b642890a7c5b1c9f0c937f414ac3b7b10c753d"} Dec 05 08:27:36 crc kubenswrapper[4780]: I1205 08:27:36.283757 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:37 crc kubenswrapper[4780]: I1205 08:27:37.092755 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 05 08:27:37 crc kubenswrapper[4780]: I1205 08:27:37.488683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerStarted","Data":"a4b6f458f0c33095f7bbc7b81c47d033e4c52e0eec6268ae4e1fe006749b9049"} Dec 05 08:27:37 crc kubenswrapper[4780]: I1205 08:27:37.776069 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.513557 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerStarted","Data":"7edf9bc332ee8fa80427b720075e5283074a7d91240ff9bce1b278c8c4a11ec8"} Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.517700 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerStarted","Data":"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4"} Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.517918 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-central-agent" containerID="cri-o://6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b" gracePeriod=30 Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.518238 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.518654 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="proxy-httpd" containerID="cri-o://4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4" gracePeriod=30 Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.518749 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="sg-core" containerID="cri-o://0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89" gracePeriod=30 Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.518815 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-notification-agent" containerID="cri-o://eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b" gracePeriod=30 Dec 05 08:27:38 crc kubenswrapper[4780]: I1205 08:27:38.555641 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.338299547 podStartE2EDuration="6.555613194s" podCreationTimestamp="2025-12-05 08:27:32 +0000 UTC" firstStartedPulling="2025-12-05 08:27:33.560206053 +0000 UTC m=+6087.629722385" lastFinishedPulling="2025-12-05 08:27:37.7775197 +0000 UTC m=+6091.847036032" observedRunningTime="2025-12-05 08:27:38.543571806 +0000 UTC m=+6092.613088158" watchObservedRunningTime="2025-12-05 08:27:38.555613194 +0000 UTC m=+6092.625129526" Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531359 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerID="4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4" exitCode=0 Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531727 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerID="0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89" exitCode=2 Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531746 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerID="eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b" exitCode=0 Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531787 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerDied","Data":"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4"} Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531824 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerDied","Data":"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89"} Dec 05 08:27:39 crc kubenswrapper[4780]: I1205 08:27:39.531838 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerDied","Data":"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b"} Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.554571 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerStarted","Data":"80caa0654acee6de525ced4b204f087e939ddf4f6c18a7a5bdd42cf9de99108b"} Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.555315 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-api" containerID="cri-o://b30625132c1805b8c864cba2d9b642890a7c5b1c9f0c937f414ac3b7b10c753d" gracePeriod=30 Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.555963 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-listener" containerID="cri-o://80caa0654acee6de525ced4b204f087e939ddf4f6c18a7a5bdd42cf9de99108b" gracePeriod=30 Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.556024 4780 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/aodh-0" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-notifier" containerID="cri-o://7edf9bc332ee8fa80427b720075e5283074a7d91240ff9bce1b278c8c4a11ec8" gracePeriod=30 Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.556070 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-evaluator" containerID="cri-o://a4b6f458f0c33095f7bbc7b81c47d033e4c52e0eec6268ae4e1fe006749b9049" gracePeriod=30 Dec 05 08:27:41 crc kubenswrapper[4780]: I1205 08:27:41.589507 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.999600104 podStartE2EDuration="8.589480032s" podCreationTimestamp="2025-12-05 08:27:33 +0000 UTC" firstStartedPulling="2025-12-05 08:27:34.284816732 +0000 UTC m=+6088.354333064" lastFinishedPulling="2025-12-05 08:27:40.87469666 +0000 UTC m=+6094.944212992" observedRunningTime="2025-12-05 08:27:41.57766049 +0000 UTC m=+6095.647176842" watchObservedRunningTime="2025-12-05 08:27:41.589480032 +0000 UTC m=+6095.658996364" Dec 05 08:27:42 crc kubenswrapper[4780]: I1205 08:27:42.574440 4780 generic.go:334] "Generic (PLEG): container finished" podID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerID="a4b6f458f0c33095f7bbc7b81c47d033e4c52e0eec6268ae4e1fe006749b9049" exitCode=0 Dec 05 08:27:42 crc kubenswrapper[4780]: I1205 08:27:42.574753 4780 generic.go:334] "Generic (PLEG): container finished" podID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerID="b30625132c1805b8c864cba2d9b642890a7c5b1c9f0c937f414ac3b7b10c753d" exitCode=0 Dec 05 08:27:42 crc kubenswrapper[4780]: I1205 08:27:42.574538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerDied","Data":"a4b6f458f0c33095f7bbc7b81c47d033e4c52e0eec6268ae4e1fe006749b9049"} Dec 05 08:27:42 crc kubenswrapper[4780]: I1205 08:27:42.574794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerDied","Data":"b30625132c1805b8c864cba2d9b642890a7c5b1c9f0c937f414ac3b7b10c753d"} Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.026356 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.125663 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126087 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126191 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126185 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126280 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126364 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126501 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c44p5\" (UniqueName: \"kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.126581 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs\") pod \"b7e6c9d7-2431-442d-82c9-aab59434cda1\" (UID: \"b7e6c9d7-2431-442d-82c9-aab59434cda1\") " Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.127200 4780 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-run-httpd\") on node \"crc\" DevicePath 
\"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.127543 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.140756 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.141040 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.151142 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5" (OuterVolumeSpecName: "kube-api-access-c44p5") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "kube-api-access-c44p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.151138 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts" (OuterVolumeSpecName: "scripts") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.233805 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.233842 4780 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b7e6c9d7-2431-442d-82c9-aab59434cda1-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.233852 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c44p5\" (UniqueName: \"kubernetes.io/projected/b7e6c9d7-2431-442d-82c9-aab59434cda1-kube-api-access-c44p5\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.241043 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.320554 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.336219 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.336255 4780 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.389089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.415009 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data" (OuterVolumeSpecName: "config-data") pod "b7e6c9d7-2431-442d-82c9-aab59434cda1" (UID: "b7e6c9d7-2431-442d-82c9-aab59434cda1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.438500 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.438542 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e6c9d7-2431-442d-82c9-aab59434cda1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.584916 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerID="6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b" exitCode=0 Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.584958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerDied","Data":"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b"} Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.584983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b7e6c9d7-2431-442d-82c9-aab59434cda1","Type":"ContainerDied","Data":"d919b64a1a71176df0adfdebc7bb0907c980419f58d1ddf756b91ad262e7c790"} Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.585000 4780 scope.go:117] "RemoveContainer" containerID="4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.585130 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.611863 4780 scope.go:117] "RemoveContainer" containerID="0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.629153 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.646764 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.650829 4780 scope.go:117] "RemoveContainer" containerID="eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.665854 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.666530 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="proxy-httpd" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666554 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="proxy-httpd" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.666571 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-notification-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666578 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-notification-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.666603 4780 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="sg-core" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666611 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="sg-core" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.666643 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-central-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666649 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-central-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666912 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="proxy-httpd" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666948 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-notification-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666969 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="ceilometer-central-agent" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.666980 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" containerName="sg-core" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.676124 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.676737 4780 scope.go:117] "RemoveContainer" containerID="6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.679893 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.680105 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.680928 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.684833 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.715220 4780 scope.go:117] "RemoveContainer" containerID="4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.715687 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4\": container with ID starting with 4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4 not found: ID does not exist" containerID="4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.715729 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4"} err="failed to get container status \"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4\": rpc error: code = NotFound desc = could not find container 
\"4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4\": container with ID starting with 4984972b280e46c4afa9a1b28fd54a993242e5e51255bb05600db2b89df657f4 not found: ID does not exist" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.715757 4780 scope.go:117] "RemoveContainer" containerID="0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.716156 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89\": container with ID starting with 0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89 not found: ID does not exist" containerID="0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.716190 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89"} err="failed to get container status \"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89\": rpc error: code = NotFound desc = could not find container \"0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89\": container with ID starting with 0e02da6c1133e4d1873d920df6344a34b67b6ccb6fc6ff2aa6b242c28cd05f89 not found: ID does not exist" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.716212 4780 scope.go:117] "RemoveContainer" containerID="eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.716485 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b\": container with ID starting with eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b not found: ID does not exist" containerID="eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.716512 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b"} err="failed to get container status \"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b\": rpc error: code = NotFound desc = could not find container \"eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b\": container with ID starting with eb91512873ce6b6f74bae60a32a97e7fd9fdffa18280047db5bda1a8d350a26b not found: ID does not exist" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.716525 4780 scope.go:117] "RemoveContainer" containerID="6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b" Dec 05 08:27:43 crc kubenswrapper[4780]: E1205 08:27:43.716924 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b\": container with ID starting with 6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b not found: ID does not exist" containerID="6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.716952 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b"} 
err="failed to get container status \"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b\": rpc error: code = NotFound desc = could not find container \"6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b\": container with ID starting with 6d65263677ecbb794d5dd873eb3339191017bb31850204fd2fa4005f9794500b not found: ID does not exist" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744504 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-log-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-scripts\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744634 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5g8r\" (UniqueName: \"kubernetes.io/projected/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-kube-api-access-j5g8r\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744729 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-config-data\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744786 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-run-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.744871 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847244 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-run-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847597 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847674 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847692 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-log-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847723 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-scripts\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847746 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5g8r\" (UniqueName: \"kubernetes.io/projected/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-kube-api-access-j5g8r\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847821 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-config-data\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.847864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.848831 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-log-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.849343 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-run-httpd\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.853608 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-scripts\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.854100 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.854766 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.855650 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-config-data\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.857191 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:43 crc kubenswrapper[4780]: I1205 08:27:43.866860 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5g8r\" (UniqueName: \"kubernetes.io/projected/5a2ccc7e-6a76-4f09-893f-243fe7cee6d2-kube-api-access-j5g8r\") pod \"ceilometer-0\" (UID: \"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2\") " pod="openstack/ceilometer-0" Dec 05 08:27:44 crc kubenswrapper[4780]: I1205 08:27:44.007115 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 08:27:44 crc kubenswrapper[4780]: I1205 08:27:44.158949 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e6c9d7-2431-442d-82c9-aab59434cda1" path="/var/lib/kubelet/pods/b7e6c9d7-2431-442d-82c9-aab59434cda1/volumes" Dec 05 08:27:44 crc kubenswrapper[4780]: W1205 08:27:44.472220 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a2ccc7e_6a76_4f09_893f_243fe7cee6d2.slice/crio-cf737d2a6d6264546d67ad0fdad6be5bfe5f5cd4eb3d4961d743eebfddd32f50 WatchSource:0}: Error finding container cf737d2a6d6264546d67ad0fdad6be5bfe5f5cd4eb3d4961d743eebfddd32f50: Status 404 returned error can't find the container with id cf737d2a6d6264546d67ad0fdad6be5bfe5f5cd4eb3d4961d743eebfddd32f50 Dec 05 08:27:44 crc kubenswrapper[4780]: I1205 08:27:44.473185 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 08:27:44 crc kubenswrapper[4780]: I1205 08:27:44.596843 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2","Type":"ContainerStarted","Data":"cf737d2a6d6264546d67ad0fdad6be5bfe5f5cd4eb3d4961d743eebfddd32f50"} Dec 05 08:27:45 crc kubenswrapper[4780]: I1205 08:27:45.612897 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2","Type":"ContainerStarted","Data":"8f5b40607492190962d4d01d81bca1ef0a4f23b89babc3e0adb41ed008306df7"} Dec 05 08:27:45 crc kubenswrapper[4780]: I1205 08:27:45.613253 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2","Type":"ContainerStarted","Data":"067f48ec15a7b64cfb2f38cfc5c798ea29938db020c903d53a608260c509cd6b"} Dec 05 08:27:46 crc kubenswrapper[4780]: I1205 08:27:46.625923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2","Type":"ContainerStarted","Data":"8c9193972f74f5fe651cfa9abc99ac0f59a74aeb9ee548b6d1cf6832caa85b0a"} Dec 05 08:27:47 crc kubenswrapper[4780]: I1205 08:27:47.638631 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a2ccc7e-6a76-4f09-893f-243fe7cee6d2","Type":"ContainerStarted","Data":"d0526f1ea85f491d92166c27221a3ae889fff44591ee4275980b325f1f995c17"} Dec 05 08:27:47 crc kubenswrapper[4780]: I1205 08:27:47.639090 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 08:27:47 crc kubenswrapper[4780]: I1205 08:27:47.676926 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.130376699 podStartE2EDuration="4.676902452s" podCreationTimestamp="2025-12-05 08:27:43 +0000 UTC" firstStartedPulling="2025-12-05 08:27:44.475479047 +0000 UTC m=+6098.544995379" lastFinishedPulling="2025-12-05 08:27:47.0220048 +0000 UTC m=+6101.091521132" observedRunningTime="2025-12-05 08:27:47.664189556 +0000 UTC m=+6101.733705889" watchObservedRunningTime="2025-12-05 08:27:47.676902452 +0000 UTC m=+6101.746418784" Dec 05 08:27:56 crc kubenswrapper[4780]: I1205 08:27:56.738360 4780 generic.go:334] "Generic (PLEG): container finished" podID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerID="7edf9bc332ee8fa80427b720075e5283074a7d91240ff9bce1b278c8c4a11ec8" exitCode=0 Dec 05 08:27:56 crc kubenswrapper[4780]: 
I1205 08:27:56.738433 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerDied","Data":"7edf9bc332ee8fa80427b720075e5283074a7d91240ff9bce1b278c8c4a11ec8"} Dec 05 08:27:58 crc kubenswrapper[4780]: I1205 08:27:58.139693 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:27:58 crc kubenswrapper[4780]: E1205 08:27:58.153995 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.045678 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8pbvc"] Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.058033 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ccf1-account-create-update-ztgdc"] Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.066954 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8pbvc"] Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.075752 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ccf1-account-create-update-ztgdc"] Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.152330 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b4dce0-7ba9-44a4-8a74-69df0962589c" path="/var/lib/kubelet/pods/87b4dce0-7ba9-44a4-8a74-69df0962589c/volumes" Dec 05 08:28:10 crc kubenswrapper[4780]: I1205 08:28:10.152988 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e66d54-de82-4ba3-b098-bef82a296ac1" path="/var/lib/kubelet/pods/c5e66d54-de82-4ba3-b098-bef82a296ac1/volumes" Dec 05 08:28:11 crc kubenswrapper[4780]: I1205 08:28:11.140009 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:28:11 crc kubenswrapper[4780]: E1205 08:28:11.140254 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:28:11 crc kubenswrapper[4780]: I1205 08:28:11.902139 4780 generic.go:334] "Generic (PLEG): container finished" podID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerID="80caa0654acee6de525ced4b204f087e939ddf4f6c18a7a5bdd42cf9de99108b" exitCode=137 Dec 05 08:28:11 crc kubenswrapper[4780]: I1205 08:28:11.902401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerDied","Data":"80caa0654acee6de525ced4b204f087e939ddf4f6c18a7a5bdd42cf9de99108b"} Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.027625 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.157086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle\") pod \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.157143 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts\") pod \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.157426 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8c5w\" (UniqueName: \"kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w\") pod \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.157577 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data\") pod \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\" (UID: \"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd\") " Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.177418 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w" (OuterVolumeSpecName: "kube-api-access-t8c5w") pod "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" (UID: "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd"). InnerVolumeSpecName "kube-api-access-t8c5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.177891 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts" (OuterVolumeSpecName: "scripts") pod "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" (UID: "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.268808 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8c5w\" (UniqueName: \"kubernetes.io/projected/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-kube-api-access-t8c5w\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.268852 4780 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.454966 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" (UID: "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.455903 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data" (OuterVolumeSpecName: "config-data") pod "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" (UID: "34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.486347 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.486394 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.916116 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd","Type":"ContainerDied","Data":"4ef4780b757fb1b5f17eb762277b4759b467c927d7e28fd1384e7d7c3c2fd3df"} Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.916407 4780 scope.go:117] "RemoveContainer" containerID="80caa0654acee6de525ced4b204f087e939ddf4f6c18a7a5bdd42cf9de99108b" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.916212 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.953892 4780 scope.go:117] "RemoveContainer" containerID="7edf9bc332ee8fa80427b720075e5283074a7d91240ff9bce1b278c8c4a11ec8" Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.958942 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.972060 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Dec 05 08:28:12 crc kubenswrapper[4780]: I1205 08:28:12.983355 4780 scope.go:117] "RemoveContainer" containerID="a4b6f458f0c33095f7bbc7b81c47d033e4c52e0eec6268ae4e1fe006749b9049" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.002124 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 05 08:28:13 crc kubenswrapper[4780]: E1205 08:28:13.002955 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-evaluator" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.002976 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-evaluator" Dec 05 08:28:13 crc kubenswrapper[4780]: E1205 08:28:13.003002 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-api" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003008 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-api" Dec 05 08:28:13 crc kubenswrapper[4780]: E1205 08:28:13.003032 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-listener" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003038 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-listener" Dec 05 08:28:13 crc kubenswrapper[4780]: E1205 08:28:13.003048 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-notifier" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003054 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-notifier" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003240 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-evaluator" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003256 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-listener" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003269 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-notifier" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.003282 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" containerName="aodh-api" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.005285 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.017349 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.020421 4780 scope.go:117] "RemoveContainer" containerID="b30625132c1805b8c864cba2d9b642890a7c5b1c9f0c937f414ac3b7b10c753d" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.021565 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.021790 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.022000 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.022256 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.025058 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-gc8bn" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097718 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-scripts\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097785 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-config-data\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097837 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8fp\" (UniqueName: 
\"kubernetes.io/projected/22765156-6f4d-420a-a071-68e4c7eae696-kube-api-access-gh8fp\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097854 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-internal-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097924 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-combined-ca-bundle\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.097970 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-public-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200135 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-scripts\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200197 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-config-data\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200254 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8fp\" (UniqueName: \"kubernetes.io/projected/22765156-6f4d-420a-a071-68e4c7eae696-kube-api-access-gh8fp\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-internal-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200321 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-combined-ca-bundle\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.200361 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-public-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.206403 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-scripts\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.206638 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-combined-ca-bundle\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.206826 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-public-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.207413 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-internal-tls-certs\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.210062 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22765156-6f4d-420a-a071-68e4c7eae696-config-data\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.219861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8fp\" (UniqueName: \"kubernetes.io/projected/22765156-6f4d-420a-a071-68e4c7eae696-kube-api-access-gh8fp\") pod \"aodh-0\" (UID: \"22765156-6f4d-420a-a071-68e4c7eae696\") " pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.345704 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.879835 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 05 08:28:13 crc kubenswrapper[4780]: I1205 08:28:13.959314 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"22765156-6f4d-420a-a071-68e4c7eae696","Type":"ContainerStarted","Data":"130dbff443661b0ca8b19f2d9142ddf67e0b544bb3befbd80ab66dc1db8ea8f7"} Dec 05 08:28:14 crc kubenswrapper[4780]: I1205 08:28:14.020475 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 08:28:14 crc kubenswrapper[4780]: I1205 08:28:14.150987 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd" path="/var/lib/kubelet/pods/34d0b9a1-9b51-48a0-8e1e-bf7cdf39dbbd/volumes" Dec 05 08:28:14 crc kubenswrapper[4780]: I1205 08:28:14.973286 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"22765156-6f4d-420a-a071-68e4c7eae696","Type":"ContainerStarted","Data":"ebc82c98bb5df5c0adaa9d041b5503d64c9599e2eea11783edef34ca319c3975"} Dec 05 08:28:14 crc kubenswrapper[4780]: I1205 08:28:14.973644 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"22765156-6f4d-420a-a071-68e4c7eae696","Type":"ContainerStarted","Data":"eb0d251a8f318120f661ad59e634500052f6b96ab594ee9559c5f446169182cb"} Dec 05 08:28:15 crc kubenswrapper[4780]: I1205 08:28:15.987009 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"22765156-6f4d-420a-a071-68e4c7eae696","Type":"ContainerStarted","Data":"90a5c990b1e1bf28f351b31d097874e6e539dd5f612859c1a93e39a3f28856e8"} Dec 05 08:28:15 crc kubenswrapper[4780]: I1205 08:28:15.987316 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"22765156-6f4d-420a-a071-68e4c7eae696","Type":"ContainerStarted","Data":"d1a6fd1bd3676a97d5bf132dec3c994fdb837b4289d9f977e271903ba161c5f2"} Dec 05 08:28:16 crc kubenswrapper[4780]: I1205 08:28:16.011000 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.529680539 podStartE2EDuration="4.010982789s" podCreationTimestamp="2025-12-05 08:28:12 +0000 UTC" firstStartedPulling="2025-12-05 08:28:13.899023305 +0000 UTC m=+6127.968539637" lastFinishedPulling="2025-12-05 08:28:15.380325555 +0000 UTC m=+6129.449841887" observedRunningTime="2025-12-05 08:28:16.007620157 +0000 UTC m=+6130.077136489" watchObservedRunningTime="2025-12-05 08:28:16.010982789 +0000 UTC m=+6130.080499121" Dec 05 08:28:20 crc kubenswrapper[4780]: I1205 08:28:20.983884 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:20 crc kubenswrapper[4780]: I1205 08:28:20.986182 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:20 crc kubenswrapper[4780]: I1205 08:28:20.988289 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.011626 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.072873 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.072987 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jfk\" (UniqueName: \"kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.073028 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.073156 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.073263 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.073981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.175733 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.175871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: 
\"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.176018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jfk\" (UniqueName: \"kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.176065 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.176139 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.176184 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.177022 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.177053 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.177090 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.177087 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.177451 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 
08:28:21.200146 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jfk\" (UniqueName: \"kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk\") pod \"dnsmasq-dns-77c6fc98b9-rdv6k\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.310524 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:21 crc kubenswrapper[4780]: I1205 08:28:21.811050 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:21 crc kubenswrapper[4780]: W1205 08:28:21.813490 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc08d7d4_2666_48c1_8419_ca99ef102004.slice/crio-f003c8bc3139e78cdcff432da686f531e0fd3b64837b2d7985375dab0e2dd2cc WatchSource:0}: Error finding container f003c8bc3139e78cdcff432da686f531e0fd3b64837b2d7985375dab0e2dd2cc: Status 404 returned error can't find the container with id f003c8bc3139e78cdcff432da686f531e0fd3b64837b2d7985375dab0e2dd2cc Dec 05 08:28:22 crc kubenswrapper[4780]: I1205 08:28:22.077650 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" event={"ID":"bc08d7d4-2666-48c1-8419-ca99ef102004","Type":"ContainerStarted","Data":"f003c8bc3139e78cdcff432da686f531e0fd3b64837b2d7985375dab0e2dd2cc"} Dec 05 08:28:23 crc kubenswrapper[4780]: I1205 08:28:23.088105 4780 generic.go:334] "Generic (PLEG): container finished" podID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerID="146eb5173ec87d438dc1c9267767f060fe762efbfd599b2f371f18dcf451fad7" exitCode=0 Dec 05 08:28:23 crc kubenswrapper[4780]: I1205 08:28:23.088230 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" event={"ID":"bc08d7d4-2666-48c1-8419-ca99ef102004","Type":"ContainerDied","Data":"146eb5173ec87d438dc1c9267767f060fe762efbfd599b2f371f18dcf451fad7"} Dec 05 08:28:24 crc kubenswrapper[4780]: I1205 08:28:24.101156 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" event={"ID":"bc08d7d4-2666-48c1-8419-ca99ef102004","Type":"ContainerStarted","Data":"23109c988fe9574ff86ed3aa8bbcef7031a8b62617b37ad0a533be951448d1ec"} Dec 05 08:28:24 crc kubenswrapper[4780]: I1205 08:28:24.101670 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:24 crc kubenswrapper[4780]: I1205 08:28:24.126758 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" podStartSLOduration=4.126730688 podStartE2EDuration="4.126730688s" podCreationTimestamp="2025-12-05 08:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:28:24.119299326 +0000 UTC m=+6138.188815658" watchObservedRunningTime="2025-12-05 08:28:24.126730688 +0000 UTC m=+6138.196247020" Dec 05 08:28:24 crc kubenswrapper[4780]: I1205 08:28:24.139789 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:28:24 crc kubenswrapper[4780]: E1205 08:28:24.140113 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:28:30 crc kubenswrapper[4780]: I1205 08:28:30.997660 4780 scope.go:117] "RemoveContainer" containerID="38ee1f784b056fd21096d24b7de81d7376077792b5597ba752312dfe8d2cff49" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.025140 4780 scope.go:117] "RemoveContainer" containerID="a0a68414e6cacab39e8c0e6d0f3246f3175a6834f26b0d28c757a8cb9f2ceab5" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.312829 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.369784 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.370082 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="dnsmasq-dns" containerID="cri-o://966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c" gracePeriod=10 Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.681316 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d8b8bbc9-ptbht"] Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.685487 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.691294 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8b8bbc9-ptbht"] Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719601 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-openstack-cell1\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719658 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-config\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-dns-svc\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719730 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719765 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.719816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfv4r\" (UniqueName: \"kubernetes.io/projected/e569a1ba-a14b-4d20-b449-7eaea024d0e6-kube-api-access-lfv4r\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824376 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-openstack-cell1\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824457 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-config\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824507 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-dns-svc\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824573 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824625 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.824706 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfv4r\" (UniqueName: \"kubernetes.io/projected/e569a1ba-a14b-4d20-b449-7eaea024d0e6-kube-api-access-lfv4r\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.825574 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-config\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.825907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.825907 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.826030 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-dns-svc\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.826312 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e569a1ba-a14b-4d20-b449-7eaea024d0e6-openstack-cell1\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.844593 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfv4r\" (UniqueName: \"kubernetes.io/projected/e569a1ba-a14b-4d20-b449-7eaea024d0e6-kube-api-access-lfv4r\") pod \"dnsmasq-dns-6d8b8bbc9-ptbht\" (UID: \"e569a1ba-a14b-4d20-b449-7eaea024d0e6\") " pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:31 crc kubenswrapper[4780]: I1205 08:28:31.908183 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.014676 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.027859 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb\") pod \"aaa7f853-e31b-4193-a8fd-761904d6671e\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.027989 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config\") pod \"aaa7f853-e31b-4193-a8fd-761904d6671e\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.028028 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb\") pod \"aaa7f853-e31b-4193-a8fd-761904d6671e\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.028084 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbnw\" (UniqueName: \"kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw\") pod \"aaa7f853-e31b-4193-a8fd-761904d6671e\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.028228 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc\") pod \"aaa7f853-e31b-4193-a8fd-761904d6671e\" (UID: \"aaa7f853-e31b-4193-a8fd-761904d6671e\") " Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.032773 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw" (OuterVolumeSpecName: "kube-api-access-nwbnw") pod "aaa7f853-e31b-4193-a8fd-761904d6671e" (UID: "aaa7f853-e31b-4193-a8fd-761904d6671e"). InnerVolumeSpecName "kube-api-access-nwbnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.081808 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config" (OuterVolumeSpecName: "config") pod "aaa7f853-e31b-4193-a8fd-761904d6671e" (UID: "aaa7f853-e31b-4193-a8fd-761904d6671e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.084771 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaa7f853-e31b-4193-a8fd-761904d6671e" (UID: "aaa7f853-e31b-4193-a8fd-761904d6671e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.090177 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aaa7f853-e31b-4193-a8fd-761904d6671e" (UID: "aaa7f853-e31b-4193-a8fd-761904d6671e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.115630 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aaa7f853-e31b-4193-a8fd-761904d6671e" (UID: "aaa7f853-e31b-4193-a8fd-761904d6671e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.132484 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.132521 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.132531 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.132543 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbnw\" (UniqueName: \"kubernetes.io/projected/aaa7f853-e31b-4193-a8fd-761904d6671e-kube-api-access-nwbnw\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.132557 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaa7f853-e31b-4193-a8fd-761904d6671e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.199349 4780 generic.go:334] "Generic (PLEG): container finished" podID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerID="966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c" exitCode=0 Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.199426 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" event={"ID":"aaa7f853-e31b-4193-a8fd-761904d6671e","Type":"ContainerDied","Data":"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c"} Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.199478 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" event={"ID":"aaa7f853-e31b-4193-a8fd-761904d6671e","Type":"ContainerDied","Data":"f43080646a66ec609481503cfdf90ea48fd9baaf0fca017a197946caea823fbb"} Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.199510 4780 scope.go:117] "RemoveContainer" containerID="966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.199689 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7976bdf7b5-kctmz" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.242330 4780 scope.go:117] "RemoveContainer" containerID="cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.249273 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.259276 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7976bdf7b5-kctmz"] Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.269995 4780 scope.go:117] "RemoveContainer" containerID="966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c" Dec 05 08:28:32 crc kubenswrapper[4780]: E1205 08:28:32.272772 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c\": container with ID starting with 966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c not found: ID does not exist" containerID="966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.272802 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c"} err="failed to get container status \"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c\": rpc error: code = NotFound desc = could not find container \"966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c\": container with ID starting with 966bdb156a85e0c7fb5a10a87fa4f2e8c2a479e9740f02e0be15ceebf3ed1e2c not found: ID does not exist" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.272827 4780 scope.go:117] "RemoveContainer" containerID="cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec" Dec 05 08:28:32 crc kubenswrapper[4780]: E1205 08:28:32.273275 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec\": container with ID starting with cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec not found: ID does not exist" containerID="cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.273301 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec"} err="failed to get container status \"cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec\": rpc error: code = NotFound desc = could not find container \"cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec\": container with ID starting with cbdf5107b17fe39740b81992b3a836479de9cd8e11503eee94e9404fa1228eec not found: ID does not exist" Dec 05 08:28:32 crc kubenswrapper[4780]: I1205 08:28:32.509741 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8b8bbc9-ptbht"] Dec 05 08:28:33 crc kubenswrapper[4780]: I1205 08:28:33.599231 4780 generic.go:334] "Generic (PLEG): container finished" podID="e569a1ba-a14b-4d20-b449-7eaea024d0e6" containerID="324ab25d995a6c006666f3ddb31918ca53f2a9d4285857289f50d623738bbf9c" exitCode=0 Dec 05 08:28:33 crc kubenswrapper[4780]: I1205 08:28:33.599412 4780 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" event={"ID":"e569a1ba-a14b-4d20-b449-7eaea024d0e6","Type":"ContainerDied","Data":"324ab25d995a6c006666f3ddb31918ca53f2a9d4285857289f50d623738bbf9c"} Dec 05 08:28:33 crc kubenswrapper[4780]: I1205 08:28:33.599755 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" event={"ID":"e569a1ba-a14b-4d20-b449-7eaea024d0e6","Type":"ContainerStarted","Data":"4b8ff7235e2b80016f65ac036ac252de2973acced3835da04a76e738a224cc08"} Dec 05 08:28:34 crc kubenswrapper[4780]: I1205 08:28:34.165934 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" path="/var/lib/kubelet/pods/aaa7f853-e31b-4193-a8fd-761904d6671e/volumes" Dec 05 08:28:34 crc kubenswrapper[4780]: I1205 08:28:34.610620 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" event={"ID":"e569a1ba-a14b-4d20-b449-7eaea024d0e6","Type":"ContainerStarted","Data":"37bf1487568414dfaa117b22c17c31fd1cdec51a745572ab727a532bfc2bf91c"} Dec 05 08:28:34 crc kubenswrapper[4780]: I1205 08:28:34.610841 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:34 crc kubenswrapper[4780]: I1205 08:28:34.633763 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" podStartSLOduration=3.633742638 podStartE2EDuration="3.633742638s" podCreationTimestamp="2025-12-05 08:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 08:28:34.628848815 +0000 UTC m=+6148.698365147" watchObservedRunningTime="2025-12-05 08:28:34.633742638 +0000 UTC m=+6148.703258970" Dec 05 08:28:35 crc kubenswrapper[4780]: I1205 08:28:35.042805 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mwwvl"] Dec 05 08:28:35 crc kubenswrapper[4780]: I1205 08:28:35.054716 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mwwvl"] Dec 05 08:28:36 crc kubenswrapper[4780]: I1205 08:28:36.150657 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4843dc8e-84df-44ba-a1f8-8c626bac3df8" path="/var/lib/kubelet/pods/4843dc8e-84df-44ba-a1f8-8c626bac3df8/volumes" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.139806 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:28:37 crc kubenswrapper[4780]: E1205 08:28:37.140093 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.484053 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps"] Dec 05 08:28:37 crc kubenswrapper[4780]: E1205 08:28:37.484979 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="init" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.484996 4780 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="init" Dec 05 08:28:37 crc kubenswrapper[4780]: E1205 08:28:37.485034 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="dnsmasq-dns" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.485040 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="dnsmasq-dns" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.485237 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa7f853-e31b-4193-a8fd-761904d6671e" containerName="dnsmasq-dns" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.486009 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.489286 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.489947 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.490918 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.496277 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.508667 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps"] Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.551270 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.551444 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.551551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7l9\" (UniqueName: \"kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.551618 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.654378 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.654608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.654690 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.654767 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7l9\" (UniqueName: \"kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.660967 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.661118 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.674740 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc 
kubenswrapper[4780]: I1205 08:28:37.675035 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7l9\" (UniqueName: \"kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cphbps\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:37 crc kubenswrapper[4780]: I1205 08:28:37.811377 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:28:38 crc kubenswrapper[4780]: I1205 08:28:38.723107 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps"] Dec 05 08:28:39 crc kubenswrapper[4780]: I1205 08:28:39.659094 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" event={"ID":"77271347-a925-4f54-84d2-97489c85a5bc","Type":"ContainerStarted","Data":"377be7f656a492b4330fce68a1a39fc2565d56fb07730538fbd1941f209d1320"} Dec 05 08:28:42 crc kubenswrapper[4780]: I1205 08:28:42.017101 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d8b8bbc9-ptbht" Dec 05 08:28:42 crc kubenswrapper[4780]: I1205 08:28:42.110440 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:42 crc kubenswrapper[4780]: I1205 08:28:42.110709 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="dnsmasq-dns" containerID="cri-o://23109c988fe9574ff86ed3aa8bbcef7031a8b62617b37ad0a533be951448d1ec" gracePeriod=10 Dec 05 08:28:42 crc kubenswrapper[4780]: I1205 08:28:42.729185 4780 generic.go:334] "Generic (PLEG): container finished" podID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerID="23109c988fe9574ff86ed3aa8bbcef7031a8b62617b37ad0a533be951448d1ec" exitCode=0 Dec 05 08:28:42 crc kubenswrapper[4780]: I1205 08:28:42.729257 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" event={"ID":"bc08d7d4-2666-48c1-8419-ca99ef102004","Type":"ContainerDied","Data":"23109c988fe9574ff86ed3aa8bbcef7031a8b62617b37ad0a533be951448d1ec"} Dec 05 08:28:46 crc kubenswrapper[4780]: I1205 08:28:46.311380 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.137:5353: connect: connection refused" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.518016 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.555784 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jfk\" (UniqueName: \"kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.556162 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.556707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.557274 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.557324 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.557372 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config\") pod \"bc08d7d4-2666-48c1-8419-ca99ef102004\" (UID: \"bc08d7d4-2666-48c1-8419-ca99ef102004\") " Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.561381 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk" (OuterVolumeSpecName: "kube-api-access-x9jfk") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "kube-api-access-x9jfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.660974 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jfk\" (UniqueName: \"kubernetes.io/projected/bc08d7d4-2666-48c1-8419-ca99ef102004-kube-api-access-x9jfk\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.662505 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.670209 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.673602 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config" (OuterVolumeSpecName: "config") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.678656 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.683528 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "bc08d7d4-2666-48c1-8419-ca99ef102004" (UID: "bc08d7d4-2666-48c1-8419-ca99ef102004"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.763119 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.763162 4780 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.763174 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.763185 4780 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.763196 4780 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc08d7d4-2666-48c1-8419-ca99ef102004-config\") on node \"crc\" DevicePath \"\"" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.806800 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" event={"ID":"77271347-a925-4f54-84d2-97489c85a5bc","Type":"ContainerStarted","Data":"1da659c2b2209c667167c1487316922efd69a5fb155f79afa40176e6b501c409"} Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.816689 4780 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" event={"ID":"bc08d7d4-2666-48c1-8419-ca99ef102004","Type":"ContainerDied","Data":"f003c8bc3139e78cdcff432da686f531e0fd3b64837b2d7985375dab0e2dd2cc"} Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.816746 4780 scope.go:117] "RemoveContainer" containerID="23109c988fe9574ff86ed3aa8bbcef7031a8b62617b37ad0a533be951448d1ec" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.816952 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77c6fc98b9-rdv6k" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.840280 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" podStartSLOduration=2.223179466 podStartE2EDuration="12.840263609s" podCreationTimestamp="2025-12-05 08:28:37 +0000 UTC" firstStartedPulling="2025-12-05 08:28:38.74205504 +0000 UTC m=+6152.811571372" lastFinishedPulling="2025-12-05 08:28:49.359139183 +0000 UTC m=+6163.428655515" observedRunningTime="2025-12-05 08:28:49.829776234 +0000 UTC m=+6163.899292596" watchObservedRunningTime="2025-12-05 08:28:49.840263609 +0000 UTC m=+6163.909779931" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.841310 4780 scope.go:117] "RemoveContainer" containerID="146eb5173ec87d438dc1c9267767f060fe762efbfd599b2f371f18dcf451fad7" Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.860140 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:49 crc kubenswrapper[4780]: I1205 08:28:49.870677 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77c6fc98b9-rdv6k"] Dec 05 08:28:50 crc kubenswrapper[4780]: I1205 08:28:50.139373 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:28:50 crc kubenswrapper[4780]: E1205 08:28:50.139671 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:28:50 crc kubenswrapper[4780]: I1205 08:28:50.152554 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" path="/var/lib/kubelet/pods/bc08d7d4-2666-48c1-8419-ca99ef102004/volumes" Dec 05 08:29:02 crc kubenswrapper[4780]: I1205 08:29:02.037932 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tz48s"] Dec 05 08:29:02 crc kubenswrapper[4780]: I1205 08:29:02.047524 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tz48s"] Dec 05 08:29:02 crc kubenswrapper[4780]: I1205 08:29:02.149545 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67049d16-1624-44fc-9a39-0a9897640c19" path="/var/lib/kubelet/pods/67049d16-1624-44fc-9a39-0a9897640c19/volumes" Dec 05 08:29:02 crc kubenswrapper[4780]: I1205 08:29:02.947657 4780 generic.go:334] "Generic (PLEG): container finished" podID="77271347-a925-4f54-84d2-97489c85a5bc" containerID="1da659c2b2209c667167c1487316922efd69a5fb155f79afa40176e6b501c409" exitCode=0 Dec 05 08:29:02 crc kubenswrapper[4780]: I1205 08:29:02.947723 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" event={"ID":"77271347-a925-4f54-84d2-97489c85a5bc","Type":"ContainerDied","Data":"1da659c2b2209c667167c1487316922efd69a5fb155f79afa40176e6b501c409"} Dec 05 08:29:03 crc kubenswrapper[4780]: I1205 08:29:03.033654 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f270-account-create-update-tn5q9"] Dec 05 08:29:03 crc kubenswrapper[4780]: I1205 08:29:03.044576 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f270-account-create-update-tn5q9"] Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.155573 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1063dd08-b420-4608-8d65-168c51c6ec7a" path="/var/lib/kubelet/pods/1063dd08-b420-4608-8d65-168c51c6ec7a/volumes" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.376136 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.422347 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory\") pod \"77271347-a925-4f54-84d2-97489c85a5bc\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.422415 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle\") pod \"77271347-a925-4f54-84d2-97489c85a5bc\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.422460 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key\") pod \"77271347-a925-4f54-84d2-97489c85a5bc\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.422662 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw7l9\" (UniqueName: \"kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9\") pod \"77271347-a925-4f54-84d2-97489c85a5bc\" (UID: \"77271347-a925-4f54-84d2-97489c85a5bc\") " Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.428501 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "77271347-a925-4f54-84d2-97489c85a5bc" (UID: "77271347-a925-4f54-84d2-97489c85a5bc"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.429660 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9" (OuterVolumeSpecName: "kube-api-access-pw7l9") pod "77271347-a925-4f54-84d2-97489c85a5bc" (UID: "77271347-a925-4f54-84d2-97489c85a5bc"). InnerVolumeSpecName "kube-api-access-pw7l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.460372 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory" (OuterVolumeSpecName: "inventory") pod "77271347-a925-4f54-84d2-97489c85a5bc" (UID: "77271347-a925-4f54-84d2-97489c85a5bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.467715 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "77271347-a925-4f54-84d2-97489c85a5bc" (UID: "77271347-a925-4f54-84d2-97489c85a5bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.525571 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw7l9\" (UniqueName: \"kubernetes.io/projected/77271347-a925-4f54-84d2-97489c85a5bc-kube-api-access-pw7l9\") on node \"crc\" DevicePath \"\"" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.525646 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.525662 4780 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.525679 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77271347-a925-4f54-84d2-97489c85a5bc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.970401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" event={"ID":"77271347-a925-4f54-84d2-97489c85a5bc","Type":"ContainerDied","Data":"377be7f656a492b4330fce68a1a39fc2565d56fb07730538fbd1941f209d1320"} Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.970474 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377be7f656a492b4330fce68a1a39fc2565d56fb07730538fbd1941f209d1320" Dec 05 08:29:04 crc kubenswrapper[4780]: I1205 08:29:04.970479 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cphbps" Dec 05 08:29:05 crc kubenswrapper[4780]: I1205 08:29:05.140643 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:29:05 crc kubenswrapper[4780]: E1205 08:29:05.141070 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:29:12 crc kubenswrapper[4780]: I1205 08:29:12.029657 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lxtzh"] Dec 05 08:29:12 crc kubenswrapper[4780]: I1205 08:29:12.038824 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lxtzh"] Dec 05 08:29:12 crc kubenswrapper[4780]: I1205 08:29:12.149418 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e102e1-771d-483a-bb67-06a66c885bb6" path="/var/lib/kubelet/pods/90e102e1-771d-483a-bb67-06a66c885bb6/volumes" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.707390 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf"] Dec 05 08:29:15 crc kubenswrapper[4780]: E1205 08:29:15.708567 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="dnsmasq-dns" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.708603 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="dnsmasq-dns" Dec 05 08:29:15 crc kubenswrapper[4780]: E1205 08:29:15.708622 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="init" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.708629 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="init" Dec 05 08:29:15 crc kubenswrapper[4780]: E1205 08:29:15.708668 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77271347-a925-4f54-84d2-97489c85a5bc" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.708679 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="77271347-a925-4f54-84d2-97489c85a5bc" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.708951 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc08d7d4-2666-48c1-8419-ca99ef102004" containerName="dnsmasq-dns" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.708982 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="77271347-a925-4f54-84d2-97489c85a5bc" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.711301 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.714344 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.714605 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.714969 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.716981 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf"] Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.718212 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.880402 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.880823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.880988 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.881255 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6d2\" (UniqueName: \"kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.983721 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6d2\" (UniqueName: \"kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.983867 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.984026 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.984175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.989567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.990096 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:15 crc kubenswrapper[4780]: I1205 08:29:15.995997 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:16 crc kubenswrapper[4780]: I1205 08:29:16.010082 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6d2\" (UniqueName: \"kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:16 crc kubenswrapper[4780]: I1205 08:29:16.053174 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:29:16 crc kubenswrapper[4780]: I1205 08:29:16.601686 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf"] Dec 05 08:29:17 crc kubenswrapper[4780]: I1205 08:29:17.138430 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:29:17 crc kubenswrapper[4780]: E1205 08:29:17.139003 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:29:17 crc kubenswrapper[4780]: I1205 08:29:17.139562 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" event={"ID":"47fba3c8-cdfe-4395-a921-933521a08de8","Type":"ContainerStarted","Data":"9fa6ec8dd8e3cb0f952fb91540548a78b6517f02d84abf16aa067f83d01767e3"} Dec 05 08:29:18 crc kubenswrapper[4780]: I1205 08:29:18.150086 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" event={"ID":"47fba3c8-cdfe-4395-a921-933521a08de8","Type":"ContainerStarted","Data":"08bd81237a1bc1b7e10e127abe295c189dfbdd17a53c1795ceed26e1d2cdaaaa"} Dec 05 08:29:18 crc kubenswrapper[4780]: I1205 08:29:18.178477 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" podStartSLOduration=2.452531754 podStartE2EDuration="3.178460158s" podCreationTimestamp="2025-12-05 08:29:15 +0000 UTC" firstStartedPulling="2025-12-05 08:29:16.602123653 +0000 UTC m=+6190.671639985" lastFinishedPulling="2025-12-05 08:29:17.328052057 +0000 UTC m=+6191.397568389" observedRunningTime="2025-12-05 08:29:18.174837759 +0000 UTC m=+6192.244354091" watchObservedRunningTime="2025-12-05 08:29:18.178460158 +0000 UTC m=+6192.247976480" Dec 05 08:29:29 crc kubenswrapper[4780]: I1205 08:29:29.139233 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:29:29 crc kubenswrapper[4780]: E1205 08:29:29.139955 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.187757 4780 scope.go:117] "RemoveContainer" containerID="d9cdc66c27c98e8dd4b2bef0fe45e80d3d8b34eabf923e201dfa07f535eccde2" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.348630 4780 scope.go:117] "RemoveContainer" containerID="49df14526ac169e1b362aca534bddc1754c1b52f30783d2820588dffe01c7f58" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.372172 4780 scope.go:117] "RemoveContainer" containerID="a7e5bb34ace7ede46381fdd22b9329fdce20dc174f0a9a59e4296ba0297b6e53" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 
08:29:31.452127 4780 scope.go:117] "RemoveContainer" containerID="65167a0a77abcefc14f54f7848b3ad99a9995a024464c80d3e541109fab9b90b" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.476459 4780 scope.go:117] "RemoveContainer" containerID="5303de4171d90ef2e51e37ab70c95e47fdbd5177f692ed0623b5c07fdaa8883d" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.524201 4780 scope.go:117] "RemoveContainer" containerID="bbfaf9795098c6706e5231432baf703fe35f3467c53d6aa2c0a1f7cea4f4a2e1" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.550464 4780 scope.go:117] "RemoveContainer" containerID="d83ecc3b2bb4d5a94fde96d0ec0406351c8e3665540674ca09e2db0c5baf741f" Dec 05 08:29:31 crc kubenswrapper[4780]: I1205 08:29:31.590644 4780 scope.go:117] "RemoveContainer" containerID="4c024693137c09ce3bf01f0fc53ab53620a418f8e19134aa51f36cbb778e758a" Dec 05 08:29:44 crc kubenswrapper[4780]: I1205 08:29:44.139121 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:29:44 crc kubenswrapper[4780]: E1205 08:29:44.139980 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:29:55 crc kubenswrapper[4780]: I1205 08:29:55.139288 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:29:55 crc kubenswrapper[4780]: E1205 08:29:55.140100 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.155163 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x"] Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.157320 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.160116 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.160381 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.175007 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x"] Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.222278 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.222575 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.222656 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77dn\" (UniqueName: \"kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.325690 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.325827 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77dn\" (UniqueName: \"kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.325984 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.326696 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume\") pod 
\"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.332221 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.343461 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77dn\" (UniqueName: \"kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn\") pod \"collect-profiles-29415390-lzh8x\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:00 crc kubenswrapper[4780]: I1205 08:30:00.481487 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:01 crc kubenswrapper[4780]: I1205 08:30:01.090352 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x"] Dec 05 08:30:01 crc kubenswrapper[4780]: I1205 08:30:01.580968 4780 generic.go:334] "Generic (PLEG): container finished" podID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerID="0a326c13ebb6083aa7a208590530b6a0b0c6d214dfafd34d3bf43a4dab46dcb5" exitCode=0 Dec 05 08:30:01 crc kubenswrapper[4780]: I1205 08:30:01.581039 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" event={"ID":"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9","Type":"ContainerDied","Data":"0a326c13ebb6083aa7a208590530b6a0b0c6d214dfafd34d3bf43a4dab46dcb5"} Dec 05 08:30:01 crc kubenswrapper[4780]: I1205 08:30:01.581394 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" event={"ID":"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9","Type":"ContainerStarted","Data":"765343ff96cd9e633955169d16961de1dc411821eada37a923a240c1888c6bbd"} Dec 05 08:30:02 crc kubenswrapper[4780]: I1205 08:30:02.988430 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.090693 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77dn\" (UniqueName: \"kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn\") pod \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.090780 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume\") pod \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.090812 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume\") pod \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\" (UID: \"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9\") " Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.091916 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume" (OuterVolumeSpecName: "config-volume") pod "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" (UID: "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.098685 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn" (OuterVolumeSpecName: "kube-api-access-h77dn") pod "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" (UID: "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9"). InnerVolumeSpecName "kube-api-access-h77dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.098873 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" (UID: "7efffc02-4ec6-4e9d-ad06-49d23b3acaa9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.194991 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77dn\" (UniqueName: \"kubernetes.io/projected/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-kube-api-access-h77dn\") on node \"crc\" DevicePath \"\"" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.195031 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.195041 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.610093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" event={"ID":"7efffc02-4ec6-4e9d-ad06-49d23b3acaa9","Type":"ContainerDied","Data":"765343ff96cd9e633955169d16961de1dc411821eada37a923a240c1888c6bbd"} Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.610405 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765343ff96cd9e633955169d16961de1dc411821eada37a923a240c1888c6bbd" Dec 05 08:30:03 crc kubenswrapper[4780]: I1205 08:30:03.610161 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x" Dec 05 08:30:04 crc kubenswrapper[4780]: I1205 08:30:04.064532 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp"] Dec 05 08:30:04 crc kubenswrapper[4780]: I1205 08:30:04.073330 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415345-ht4gp"] Dec 05 08:30:04 crc kubenswrapper[4780]: I1205 08:30:04.207603 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb" path="/var/lib/kubelet/pods/aefa0cdf-54b6-4ac5-b49b-b2e4769c8bfb/volumes" Dec 05 08:30:10 crc kubenswrapper[4780]: I1205 08:30:10.139383 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:30:10 crc kubenswrapper[4780]: E1205 08:30:10.140503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:30:11 crc kubenswrapper[4780]: I1205 08:30:11.024319 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sdcrj"] Dec 05 08:30:11 crc kubenswrapper[4780]: I1205 08:30:11.041460 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sdcrj"] Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.029907 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-52dd-account-create-update-9lpgb"] Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.041670 4780 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-db-create-c8fz9"] Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.051068 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c8fz9"] Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.060671 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-52dd-account-create-update-9lpgb"] Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.149308 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca1dce6-700e-4805-918e-d62bec2c0fb0" path="/var/lib/kubelet/pods/2ca1dce6-700e-4805-918e-d62bec2c0fb0/volumes" Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.149946 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b95e15f-3b23-42f5-b234-1adea6f07bbb" path="/var/lib/kubelet/pods/7b95e15f-3b23-42f5-b234-1adea6f07bbb/volumes" Dec 05 08:30:12 crc kubenswrapper[4780]: I1205 08:30:12.150523 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ea4e7f-8c3d-4bee-923f-0e22234099be" path="/var/lib/kubelet/pods/e1ea4e7f-8c3d-4bee-923f-0e22234099be/volumes" Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.031121 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-53f7-account-create-update-zgjlq"] Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.042565 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0f9a-account-create-update-8fbjh"] Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.051012 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lp4xp"] Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.060772 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-53f7-account-create-update-zgjlq"] Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.071029 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lp4xp"] Dec 05 08:30:13 crc kubenswrapper[4780]: I1205 08:30:13.078845 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0f9a-account-create-update-8fbjh"] Dec 05 08:30:14 crc kubenswrapper[4780]: I1205 08:30:14.151836 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e507e08-9c40-444f-8615-23285790d5fe" path="/var/lib/kubelet/pods/1e507e08-9c40-444f-8615-23285790d5fe/volumes" Dec 05 08:30:14 crc kubenswrapper[4780]: I1205 08:30:14.152742 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374b863a-0346-414d-af63-bd8616e4df7e" path="/var/lib/kubelet/pods/374b863a-0346-414d-af63-bd8616e4df7e/volumes" Dec 05 08:30:14 crc kubenswrapper[4780]: I1205 08:30:14.154489 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78078629-9079-4b5a-91d2-0aed37d1e64a" path="/var/lib/kubelet/pods/78078629-9079-4b5a-91d2-0aed37d1e64a/volumes" Dec 05 08:30:24 crc kubenswrapper[4780]: I1205 08:30:24.139067 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:30:24 crc kubenswrapper[4780]: E1205 08:30:24.139962 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:30:31 crc kubenswrapper[4780]: I1205 08:30:31.956461 4780 scope.go:117] "RemoveContainer" containerID="a17cbb8ee2f84b3a5348e800730063f6b423a8d0dff60ff67b542891a475c429" Dec 05 08:30:31 crc kubenswrapper[4780]: I1205 08:30:31.978092 4780 scope.go:117] "RemoveContainer" containerID="2c8f145e1bdffe705280b3ff93f7c255bcc1715f071f81571e86c745098864dd" Dec 05 08:30:32 crc kubenswrapper[4780]: I1205 08:30:32.024696 4780 scope.go:117] "RemoveContainer" containerID="fd2eabb2df3758980a775ca03436a67e44e417c936f138929ccc4d1421a56bd6" Dec 05 08:30:32 crc kubenswrapper[4780]: I1205 08:30:32.073483 4780 scope.go:117] "RemoveContainer" containerID="3cc3b352cea40ae8f1b367624f0102c85c40787080be9d78d0d7053a34870666" Dec 05 08:30:32 crc kubenswrapper[4780]: I1205 08:30:32.116804 4780 scope.go:117] "RemoveContainer" containerID="267a9697c484d4c8e24de5de1c07bd015467210c1cdcf48e30167cf23d25fd1b" Dec 05 08:30:32 crc kubenswrapper[4780]: I1205 08:30:32.169046 4780 scope.go:117] "RemoveContainer" containerID="005673e271a424c05a0b1529cfbae487ba8547b4cd2636e57ccb53c5ae2ac80f" Dec 05 08:30:32 crc kubenswrapper[4780]: I1205 08:30:32.242016 4780 scope.go:117] "RemoveContainer" containerID="30071c05bd2b038754e38b35f67afd80f576a047b818222f2aa03ceb933c2a2a" Dec 05 08:30:35 crc kubenswrapper[4780]: I1205 08:30:35.040954 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-42ghh"] Dec 05 08:30:35 crc kubenswrapper[4780]: I1205 08:30:35.049428 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-42ghh"] Dec 05 08:30:36 crc kubenswrapper[4780]: I1205 08:30:36.150166 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7138ace-d969-4040-aa34-a8f46b7a192f" path="/var/lib/kubelet/pods/d7138ace-d969-4040-aa34-a8f46b7a192f/volumes" Dec 05 08:30:38 crc kubenswrapper[4780]: I1205 08:30:38.139027 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:30:38 crc kubenswrapper[4780]: E1205 08:30:38.139538 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:30:50 crc kubenswrapper[4780]: I1205 08:30:50.039781 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c5525"] Dec 05 08:30:50 crc kubenswrapper[4780]: I1205 08:30:50.051481 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c5525"] Dec 05 08:30:50 crc kubenswrapper[4780]: I1205 08:30:50.149905 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3b1df7-f393-4984-9b7e-a1ceedf8f8da" path="/var/lib/kubelet/pods/1d3b1df7-f393-4984-9b7e-a1ceedf8f8da/volumes" Dec 05 08:30:51 crc kubenswrapper[4780]: I1205 08:30:51.030823 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-r6fqt"] Dec 05 08:30:51 crc kubenswrapper[4780]: I1205 08:30:51.041282 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-r6fqt"] Dec 
05 08:30:51 crc kubenswrapper[4780]: I1205 08:30:51.139030 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:30:51 crc kubenswrapper[4780]: E1205 08:30:51.139318 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:30:52 crc kubenswrapper[4780]: I1205 08:30:52.151520 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12be9160-11f9-4c9a-a72f-dc6f26efa7eb" path="/var/lib/kubelet/pods/12be9160-11f9-4c9a-a72f-dc6f26efa7eb/volumes" Dec 05 08:31:04 crc kubenswrapper[4780]: I1205 08:31:04.140478 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:31:05 crc kubenswrapper[4780]: I1205 08:31:05.166218 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9"} Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.393703 4780 scope.go:117] "RemoveContainer" containerID="d8c1e52d05283ae6440a324bf8625d45fe33de7553f218eeda3e5ff1bdddb060" Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.438194 4780 scope.go:117] "RemoveContainer" containerID="acd6e5223bbe7a7a95027b04af00dcb157ee4135568306dd58cab1e54b15a3ac" Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.502522 4780 scope.go:117] "RemoveContainer" containerID="5ca44b429f5ced20a845c94e49194585c0844cf50839e9153ed0cb089c8f4e6c" Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.044560 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"] Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.054726 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"] Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.150269 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" path="/var/lib/kubelet/pods/02e1ae93-75ba-4434-8a7d-9f999bab7f8a/volumes" Dec 05 08:32:32 crc kubenswrapper[4780]: I1205 08:32:32.632014 4780 scope.go:117] "RemoveContainer" containerID="27dc52d6a2aee452a13408c6a41480b35782838df77c8a5f48350a2ba9843632" Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.773093 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"] Dec 05 08:32:37 crc kubenswrapper[4780]: E1205 08:32:37.774178 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles" Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.774192 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles" Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.774422 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles" Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 
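The alternating "RemoveContainer"/CrashLoopBackOff pair above repeats every 10-15 seconds until 08:31:04, when the kubelet finally permits a restart: the daemon's back-off had saturated at the "back-off 5m0s" cap quoted in the error. A sketch of that delay schedule, assuming the upstream kubelet defaults of a 10s initial delay doubling up to a 5m cap (only the cap is visible in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // assumed kubelet default
		maxDelay = 5 * time.Minute  // the "back-off 5m0s" cap seen above
	)
	d := initial
	for i := 1; i <= 8; i++ {
		fmt.Printf("crash #%d: next restart in %v\n", i, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}

The schedule saturates at 5m0s, which is why every sync attempt above is rejected with "Error syncing pod, skipping" until the five-minute window has elapsed.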
Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.393703 4780 scope.go:117] "RemoveContainer" containerID="d8c1e52d05283ae6440a324bf8625d45fe33de7553f218eeda3e5ff1bdddb060"
Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.438194 4780 scope.go:117] "RemoveContainer" containerID="acd6e5223bbe7a7a95027b04af00dcb157ee4135568306dd58cab1e54b15a3ac"
Dec 05 08:31:32 crc kubenswrapper[4780]: I1205 08:31:32.502522 4780 scope.go:117] "RemoveContainer" containerID="5ca44b429f5ced20a845c94e49194585c0844cf50839e9153ed0cb089c8f4e6c"
Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.044560 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"]
Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.054726 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bcvz8"]
Dec 05 08:31:38 crc kubenswrapper[4780]: I1205 08:31:38.150269 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e1ae93-75ba-4434-8a7d-9f999bab7f8a" path="/var/lib/kubelet/pods/02e1ae93-75ba-4434-8a7d-9f999bab7f8a/volumes"
Dec 05 08:32:32 crc kubenswrapper[4780]: I1205 08:32:32.632014 4780 scope.go:117] "RemoveContainer" containerID="27dc52d6a2aee452a13408c6a41480b35782838df77c8a5f48350a2ba9843632"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.773093 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"]
Dec 05 08:32:37 crc kubenswrapper[4780]: E1205 08:32:37.774178 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.774192 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.774422 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" containerName="collect-profiles"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.776327 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.786972 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"]
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.850414 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-utilities\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.850723 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-catalog-content\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.850864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8sj\" (UniqueName: \"kubernetes.io/projected/4bb70562-14a1-447c-8ace-52d8e827693a-kube-api-access-zk8sj\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.953940 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-catalog-content\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.954476 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8sj\" (UniqueName: \"kubernetes.io/projected/4bb70562-14a1-447c-8ace-52d8e827693a-kube-api-access-zk8sj\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.954425 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-catalog-content\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.954770 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-utilities\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.955048 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-utilities\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:37 crc kubenswrapper[4780]: I1205 08:32:37.975224 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk8sj\" (UniqueName: \"kubernetes.io/projected/4bb70562-14a1-447c-8ace-52d8e827693a-kube-api-access-zk8sj\") pod \"redhat-operators-q4bgl\" (UID: \"4bb70562-14a1-447c-8ace-52d8e827693a\") " pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:38 crc kubenswrapper[4780]: I1205 08:32:38.105978 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:38 crc kubenswrapper[4780]: I1205 08:32:38.578934 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"]
Dec 05 08:32:39 crc kubenswrapper[4780]: I1205 08:32:39.096353 4780 generic.go:334] "Generic (PLEG): container finished" podID="4bb70562-14a1-447c-8ace-52d8e827693a" containerID="ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9" exitCode=0
Dec 05 08:32:39 crc kubenswrapper[4780]: I1205 08:32:39.096662 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerDied","Data":"ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9"}
Dec 05 08:32:39 crc kubenswrapper[4780]: I1205 08:32:39.096958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerStarted","Data":"c659e025a73f0b86905aa6ade524bd4250d177ba41d29e37f257eda9aae40cfb"}
Dec 05 08:32:39 crc kubenswrapper[4780]: I1205 08:32:39.099631 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 08:32:40 crc kubenswrapper[4780]: I1205 08:32:40.109977 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerStarted","Data":"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc"}
Dec 05 08:32:43 crc kubenswrapper[4780]: I1205 08:32:43.134634 4780 generic.go:334] "Generic (PLEG): container finished" podID="4bb70562-14a1-447c-8ace-52d8e827693a" containerID="02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc" exitCode=0
Dec 05 08:32:43 crc kubenswrapper[4780]: I1205 08:32:43.134732 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerDied","Data":"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc"}
Dec 05 08:32:45 crc kubenswrapper[4780]: I1205 08:32:45.159459 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerStarted","Data":"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52"}
Dec 05 08:32:45 crc kubenswrapper[4780]: I1205 08:32:45.181569 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q4bgl" podStartSLOduration=3.077574375 podStartE2EDuration="8.181546474s" podCreationTimestamp="2025-12-05 08:32:37 +0000 UTC" firstStartedPulling="2025-12-05 08:32:39.099385036 +0000 UTC m=+6393.168901368" lastFinishedPulling="2025-12-05 08:32:44.203357135 +0000 UTC m=+6398.272873467" observedRunningTime="2025-12-05 08:32:45.176709633 +0000 UTC m=+6399.246225985" watchObservedRunningTime="2025-12-05 08:32:45.181546474 +0000 UTC m=+6399.251062806"
Dec 05 08:32:48 crc kubenswrapper[4780]: I1205 08:32:48.106522 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:48 crc kubenswrapper[4780]: I1205 08:32:48.106868 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:49 crc kubenswrapper[4780]: I1205 08:32:49.155490 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q4bgl" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="registry-server" probeResult="failure" output=<
Dec 05 08:32:49 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s
Dec 05 08:32:49 crc kubenswrapper[4780]: >
Dec 05 08:32:58 crc kubenswrapper[4780]: I1205 08:32:58.152461 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q4bgl"
Dec 05 08:32:58 crc kubenswrapper[4780]: I1205 08:32:58.200200 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q4bgl"
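The startup-probe failure above is a plain reachability check: the registry-server container was running, but nothing was accepting connections on :50051 while the catalog loaded, and ten seconds later the probe flips to started/ready. A hypothetical standalone reproduction of the 1s-timeout dial (not the probe binary the pod actually runs):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same shape as the failure above: dial the gRPC port with a 1s budget.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. connection refused / i/o timeout
		return
	}
	conn.Close()
	fmt.Println("probe success: port is accepting connections")
}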
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:32:59 crc kubenswrapper[4780]: I1205 08:32:59.981870 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:32:59 crc kubenswrapper[4780]: I1205 08:32:59.987802 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb70562-14a1-447c-8ace-52d8e827693a-kube-api-access-zk8sj" (OuterVolumeSpecName: "kube-api-access-zk8sj") pod "4bb70562-14a1-447c-8ace-52d8e827693a" (UID: "4bb70562-14a1-447c-8ace-52d8e827693a"). InnerVolumeSpecName "kube-api-access-zk8sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.083373 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bb70562-14a1-447c-8ace-52d8e827693a" (UID: "4bb70562-14a1-447c-8ace-52d8e827693a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.083479 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk8sj\" (UniqueName: \"kubernetes.io/projected/4bb70562-14a1-447c-8ace-52d8e827693a-kube-api-access-zk8sj\") on node \"crc\" DevicePath \"\"" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.185145 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bb70562-14a1-447c-8ace-52d8e827693a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.296905 4780 generic.go:334] "Generic (PLEG): container finished" podID="4bb70562-14a1-447c-8ace-52d8e827693a" containerID="b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52" exitCode=0 Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.296962 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerDied","Data":"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52"} Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.296982 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q4bgl" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.296995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q4bgl" event={"ID":"4bb70562-14a1-447c-8ace-52d8e827693a","Type":"ContainerDied","Data":"c659e025a73f0b86905aa6ade524bd4250d177ba41d29e37f257eda9aae40cfb"} Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.297019 4780 scope.go:117] "RemoveContainer" containerID="b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.328013 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"] Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.332275 4780 scope.go:117] "RemoveContainer" containerID="02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.337812 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q4bgl"] Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.356245 4780 scope.go:117] "RemoveContainer" containerID="ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.411487 4780 scope.go:117] "RemoveContainer" containerID="b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52" Dec 05 08:33:00 crc kubenswrapper[4780]: E1205 08:33:00.412182 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52\": container with ID starting with b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52 not found: ID does not exist" containerID="b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.412234 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52"} err="failed to get container status \"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52\": rpc error: code = NotFound desc = could not find container \"b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52\": container with ID starting with b97db645fcf1c1d8369b79d96eb68b4231bc7b90755cb41ec36716f8ffc0dd52 not found: ID does not exist" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.412266 4780 scope.go:117] "RemoveContainer" containerID="02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc" Dec 05 08:33:00 crc kubenswrapper[4780]: E1205 08:33:00.412669 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc\": container with ID starting with 02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc not found: ID does not exist" containerID="02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.412723 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc"} err="failed to get container status \"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc\": rpc error: code = NotFound desc = could not find container 
\"02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc\": container with ID starting with 02c6a456bf6bf5843be84a62ea58caa9c4de517f0a80d7509ea01b9719c0cebc not found: ID does not exist" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.412750 4780 scope.go:117] "RemoveContainer" containerID="ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9" Dec 05 08:33:00 crc kubenswrapper[4780]: E1205 08:33:00.413307 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9\": container with ID starting with ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9 not found: ID does not exist" containerID="ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9" Dec 05 08:33:00 crc kubenswrapper[4780]: I1205 08:33:00.413340 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9"} err="failed to get container status \"ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9\": rpc error: code = NotFound desc = could not find container \"ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9\": container with ID starting with ea93bed470b9a347e4cfe68ba7a0deae49c59024f25e4c1f2c05a9066195e0e9 not found: ID does not exist" Dec 05 08:33:02 crc kubenswrapper[4780]: I1205 08:33:02.151963 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" path="/var/lib/kubelet/pods/4bb70562-14a1-447c-8ace-52d8e827693a/volumes" Dec 05 08:33:29 crc kubenswrapper[4780]: I1205 08:33:29.908515 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:33:29 crc kubenswrapper[4780]: I1205 08:33:29.909074 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:33:59 crc kubenswrapper[4780]: I1205 08:33:59.908414 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:33:59 crc kubenswrapper[4780]: I1205 08:33:59.909060 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:34:16 crc kubenswrapper[4780]: I1205 08:34:16.069965 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-9hbt2"] Dec 05 08:34:16 crc kubenswrapper[4780]: I1205 08:34:16.079451 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-9hbt2"] Dec 05 08:34:16 crc kubenswrapper[4780]: I1205 08:34:16.150827 
4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64cea429-d3a5-4fb5-a276-42c68329032c" path="/var/lib/kubelet/pods/64cea429-d3a5-4fb5-a276-42c68329032c/volumes" Dec 05 08:34:17 crc kubenswrapper[4780]: I1205 08:34:17.029537 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-6c66-account-create-update-kbcb2"] Dec 05 08:34:17 crc kubenswrapper[4780]: I1205 08:34:17.039851 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-6c66-account-create-update-kbcb2"] Dec 05 08:34:18 crc kubenswrapper[4780]: I1205 08:34:18.151858 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8deb9906-3ae2-4c58-a395-d87d0c8e1ca1" path="/var/lib/kubelet/pods/8deb9906-3ae2-4c58-a395-d87d0c8e1ca1/volumes" Dec 05 08:34:29 crc kubenswrapper[4780]: I1205 08:34:29.908439 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:34:29 crc kubenswrapper[4780]: I1205 08:34:29.908935 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:34:29 crc kubenswrapper[4780]: I1205 08:34:29.908971 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:34:29 crc kubenswrapper[4780]: I1205 08:34:29.909688 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:34:29 crc kubenswrapper[4780]: I1205 08:34:29.909738 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9" gracePeriod=600 Dec 05 08:34:30 crc kubenswrapper[4780]: I1205 08:34:30.170321 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9" exitCode=0 Dec 05 08:34:30 crc kubenswrapper[4780]: I1205 08:34:30.170651 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9"} Dec 05 08:34:30 crc kubenswrapper[4780]: I1205 08:34:30.170688 4780 scope.go:117] "RemoveContainer" containerID="9becb0cbfceaa90d939c56b2accf0ebdbc5af96f9566fb462e630a49d9e75690" Dec 05 08:34:31 crc kubenswrapper[4780]: I1205 08:34:31.188143 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
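"connect: connection refused" in the liveness output above means nothing was listening on 127.0.0.1:8798 at all, so after repeated failures the kubelet kills the container with gracePeriod=600 and restarts it. For contrast, a minimal sketch of the endpoint shape the probe expects; the path and port are taken from the log, the handler itself is invented:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// An httpGet probe only needs a 2xx response from GET /health.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}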
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"} Dec 05 08:34:32 crc kubenswrapper[4780]: I1205 08:34:32.734028 4780 scope.go:117] "RemoveContainer" containerID="1a1215b904b69e2affd709432310e76e56f85d801c09755de2ed1649c04623ec" Dec 05 08:34:32 crc kubenswrapper[4780]: I1205 08:34:32.761590 4780 scope.go:117] "RemoveContainer" containerID="9e0bb0dd6f7c56c4c35c52fa1f348b0fb3ae58ce738c5e9bbd382b34fe0cdcf2" Dec 05 08:34:33 crc kubenswrapper[4780]: I1205 08:34:33.039604 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-mjjls"] Dec 05 08:34:33 crc kubenswrapper[4780]: I1205 08:34:33.048989 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-mjjls"] Dec 05 08:34:34 crc kubenswrapper[4780]: I1205 08:34:34.148956 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a024fc-06ca-42fa-8995-1de8cb2ce874" path="/var/lib/kubelet/pods/c5a024fc-06ca-42fa-8995-1de8cb2ce874/volumes" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.936955 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gk48d"] Dec 05 08:34:40 crc kubenswrapper[4780]: E1205 08:34:40.938157 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="registry-server" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.938176 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="registry-server" Dec 05 08:34:40 crc kubenswrapper[4780]: E1205 08:34:40.938192 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="extract-utilities" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.938200 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="extract-utilities" Dec 05 08:34:40 crc kubenswrapper[4780]: E1205 08:34:40.938262 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="extract-content" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.938273 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="extract-content" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.938550 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb70562-14a1-447c-8ace-52d8e827693a" containerName="registry-server" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.940447 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.945124 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-catalog-content\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.945227 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-utilities\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.945248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m2nl\" (UniqueName: \"kubernetes.io/projected/3122616e-f363-4387-9683-be6ea9c09964-kube-api-access-2m2nl\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:40 crc kubenswrapper[4780]: I1205 08:34:40.950942 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk48d"] Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.046432 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-catalog-content\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.046542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-utilities\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.046559 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m2nl\" (UniqueName: \"kubernetes.io/projected/3122616e-f363-4387-9683-be6ea9c09964-kube-api-access-2m2nl\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.046941 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-catalog-content\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.047034 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3122616e-f363-4387-9683-be6ea9c09964-utilities\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.066933 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2m2nl\" (UniqueName: \"kubernetes.io/projected/3122616e-f363-4387-9683-be6ea9c09964-kube-api-access-2m2nl\") pod \"certified-operators-gk48d\" (UID: \"3122616e-f363-4387-9683-be6ea9c09964\") " pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.313272 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:34:41 crc kubenswrapper[4780]: I1205 08:34:41.838593 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk48d"] Dec 05 08:34:42 crc kubenswrapper[4780]: E1205 08:34:42.280117 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3122616e_f363_4387_9683_be6ea9c09964.slice/crio-conmon-8fc07292dcb7e0f0ab5e86e165c66d99d386fe68da4f6b245b0e1bb3711a2842.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3122616e_f363_4387_9683_be6ea9c09964.slice/crio-8fc07292dcb7e0f0ab5e86e165c66d99d386fe68da4f6b245b0e1bb3711a2842.scope\": RecentStats: unable to find data in memory cache]" Dec 05 08:34:42 crc kubenswrapper[4780]: I1205 08:34:42.292348 4780 generic.go:334] "Generic (PLEG): container finished" podID="3122616e-f363-4387-9683-be6ea9c09964" containerID="8fc07292dcb7e0f0ab5e86e165c66d99d386fe68da4f6b245b0e1bb3711a2842" exitCode=0 Dec 05 08:34:42 crc kubenswrapper[4780]: I1205 08:34:42.292398 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk48d" event={"ID":"3122616e-f363-4387-9683-be6ea9c09964","Type":"ContainerDied","Data":"8fc07292dcb7e0f0ab5e86e165c66d99d386fe68da4f6b245b0e1bb3711a2842"} Dec 05 08:34:42 crc kubenswrapper[4780]: I1205 08:34:42.292423 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk48d" event={"ID":"3122616e-f363-4387-9683-be6ea9c09964","Type":"ContainerStarted","Data":"6964fd1335fe4b5acefe4e11c8b90277d084d05c29cfb63aadb34d0fc3fc225d"} Dec 05 08:34:51 crc kubenswrapper[4780]: I1205 08:34:51.370053 4780 generic.go:334] "Generic (PLEG): container finished" podID="3122616e-f363-4387-9683-be6ea9c09964" containerID="b6095340c2a1d0850832ad0ead2a00ff3f3b8768ed2e29ea923756992e17be5d" exitCode=0 Dec 05 08:34:51 crc kubenswrapper[4780]: I1205 08:34:51.370133 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk48d" event={"ID":"3122616e-f363-4387-9683-be6ea9c09964","Type":"ContainerDied","Data":"b6095340c2a1d0850832ad0ead2a00ff3f3b8768ed2e29ea923756992e17be5d"} Dec 05 08:34:52 crc kubenswrapper[4780]: I1205 08:34:52.380563 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk48d" event={"ID":"3122616e-f363-4387-9683-be6ea9c09964","Type":"ContainerStarted","Data":"bfbfa7ee4af069ccd805fc1df0ee423a08867580199f7dbd91a7075c57a2db07"} Dec 05 08:34:52 crc kubenswrapper[4780]: I1205 08:34:52.410041 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gk48d" podStartSLOduration=2.8302873120000003 podStartE2EDuration="12.410020395s" podCreationTimestamp="2025-12-05 08:34:40 +0000 UTC" firstStartedPulling="2025-12-05 08:34:42.295465565 +0000 UTC m=+6516.364981897" 
lastFinishedPulling="2025-12-05 08:34:51.875198648 +0000 UTC m=+6525.944714980" observedRunningTime="2025-12-05 08:34:52.402718737 +0000 UTC m=+6526.472235079" watchObservedRunningTime="2025-12-05 08:34:52.410020395 +0000 UTC m=+6526.479536727" Dec 05 08:35:01 crc kubenswrapper[4780]: I1205 08:35:01.313641 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:35:01 crc kubenswrapper[4780]: I1205 08:35:01.314302 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:35:01 crc kubenswrapper[4780]: I1205 08:35:01.359582 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:35:01 crc kubenswrapper[4780]: I1205 08:35:01.504873 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gk48d" Dec 05 08:35:08 crc kubenswrapper[4780]: I1205 08:35:08.346607 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk48d"] Dec 05 08:35:08 crc kubenswrapper[4780]: I1205 08:35:08.938588 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 08:35:08 crc kubenswrapper[4780]: I1205 08:35:08.976659 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hhspt" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="registry-server" containerID="cri-o://dcc2d6afaa59afd24286d4206eb3b22406caed42971ee36a35927f9226af4abd" gracePeriod=2 Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.539490 4780 generic.go:334] "Generic (PLEG): container finished" podID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerID="dcc2d6afaa59afd24286d4206eb3b22406caed42971ee36a35927f9226af4abd" exitCode=0 Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.539556 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerDied","Data":"dcc2d6afaa59afd24286d4206eb3b22406caed42971ee36a35927f9226af4abd"} Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.541236 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhspt" event={"ID":"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c","Type":"ContainerDied","Data":"5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550"} Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.541345 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1c7a8ff466f7e8237b5cd96b0cec035fe4c08282bb9d14bfd3f4b005b28550" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.613669 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.752035 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8kv7\" (UniqueName: \"kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7\") pod \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.752377 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities\") pod \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.752821 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content\") pod \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\" (UID: \"f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c\") " Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.755280 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities" (OuterVolumeSpecName: "utilities") pod "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" (UID: "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.768214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7" (OuterVolumeSpecName: "kube-api-access-v8kv7") pod "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" (UID: "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c"). InnerVolumeSpecName "kube-api-access-v8kv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.842268 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" (UID: "f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.854826 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.855247 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8kv7\" (UniqueName: \"kubernetes.io/projected/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-kube-api-access-v8kv7\") on node \"crc\" DevicePath \"\"" Dec 05 08:35:09 crc kubenswrapper[4780]: I1205 08:35:09.855352 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:35:10 crc kubenswrapper[4780]: I1205 08:35:10.548540 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hhspt" Dec 05 08:35:10 crc kubenswrapper[4780]: I1205 08:35:10.569925 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 08:35:10 crc kubenswrapper[4780]: I1205 08:35:10.578573 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hhspt"] Dec 05 08:35:12 crc kubenswrapper[4780]: I1205 08:35:12.151243 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" path="/var/lib/kubelet/pods/f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c/volumes" Dec 05 08:35:32 crc kubenswrapper[4780]: I1205 08:35:32.855235 4780 scope.go:117] "RemoveContainer" containerID="5c0f6bfe1777ec984a944e3d91b0b492d7ff461753608fcdb5109c08980b6bc3" Dec 05 08:35:32 crc kubenswrapper[4780]: I1205 08:35:32.889188 4780 scope.go:117] "RemoveContainer" containerID="35546a18a6283e9d7b971f3f39c628198a51df9404c5f48a6c7b1780cbee965e" Dec 05 08:35:32 crc kubenswrapper[4780]: I1205 08:35:32.938385 4780 scope.go:117] "RemoveContainer" containerID="dcc2d6afaa59afd24286d4206eb3b22406caed42971ee36a35927f9226af4abd" Dec 05 08:35:32 crc kubenswrapper[4780]: I1205 08:35:32.983687 4780 scope.go:117] "RemoveContainer" containerID="55e3b6ed73686189e267b8b14e661a753be25aaded732c7d28721ac888afe4f1" Dec 05 08:36:59 crc kubenswrapper[4780]: I1205 08:36:59.908036 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:36:59 crc kubenswrapper[4780]: I1205 08:36:59.908538 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:37:13 crc kubenswrapper[4780]: I1205 08:37:13.041900 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-7vfx9"] Dec 05 08:37:13 crc kubenswrapper[4780]: I1205 08:37:13.054914 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-fd3b-account-create-update-2m8qv"] Dec 05 08:37:13 crc kubenswrapper[4780]: I1205 08:37:13.066127 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-7vfx9"] Dec 05 08:37:13 crc kubenswrapper[4780]: I1205 08:37:13.075228 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-fd3b-account-create-update-2m8qv"] Dec 05 08:37:14 crc kubenswrapper[4780]: I1205 08:37:14.151004 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e480000-a70a-4680-a18d-b7b79381be78" path="/var/lib/kubelet/pods/2e480000-a70a-4680-a18d-b7b79381be78/volumes" Dec 05 08:37:14 crc kubenswrapper[4780]: I1205 08:37:14.152036 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49dc842-7c79-4e32-9caa-0ba8079ba6d5" path="/var/lib/kubelet/pods/a49dc842-7c79-4e32-9caa-0ba8079ba6d5/volumes" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.713664 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:28 crc kubenswrapper[4780]: E1205 08:37:28.714827 4780 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="extract-content" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.714841 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="extract-content" Dec 05 08:37:28 crc kubenswrapper[4780]: E1205 08:37:28.714858 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="extract-utilities" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.714867 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="extract-utilities" Dec 05 08:37:28 crc kubenswrapper[4780]: E1205 08:37:28.714909 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="registry-server" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.714915 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="registry-server" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.715111 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e0dac3-e166-4bd6-88c1-af5d7ffe7f8c" containerName="registry-server" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.716817 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.730080 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.869195 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p554\" (UniqueName: \"kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.869589 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.870055 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.906760 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.916978 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.926709 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.972033 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p554\" (UniqueName: \"kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.972092 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.972271 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.972864 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.972927 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:28 crc kubenswrapper[4780]: I1205 08:37:28.993122 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p554\" (UniqueName: \"kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554\") pod \"community-operators-g5bvt\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.038247 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.074653 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.074840 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkwh\" (UniqueName: \"kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.074980 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.176409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.177220 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.177606 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljkwh\" (UniqueName: \"kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.178064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.178384 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content\") pod \"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.197779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljkwh\" (UniqueName: \"kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh\") pod 
\"redhat-marketplace-78mcf\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.264857 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.690750 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.795820 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerStarted","Data":"b9015e781f605cf343942d53ba3de56de9289fc3b01eeac3bc10a3f6607b0e55"} Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.838150 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:29 crc kubenswrapper[4780]: W1205 08:37:29.844800 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3a28222_a43d_4728_9b13_4ddf4ef66375.slice/crio-2e1feaf0e0c760cb2783f91db366c8ae53b415dd3463598598f90781d44726c9 WatchSource:0}: Error finding container 2e1feaf0e0c760cb2783f91db366c8ae53b415dd3463598598f90781d44726c9: Status 404 returned error can't find the container with id 2e1feaf0e0c760cb2783f91db366c8ae53b415dd3463598598f90781d44726c9 Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.907996 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:37:29 crc kubenswrapper[4780]: I1205 08:37:29.908061 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.039968 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-5ck2d"] Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.048150 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-5ck2d"] Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.153516 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a" path="/var/lib/kubelet/pods/0ba7ad4d-5bb1-47e4-a0c9-5ac62416104a/volumes" Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.807550 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerID="39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc" exitCode=0 Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.807663 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerDied","Data":"39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc"} Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.807722 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerStarted","Data":"2e1feaf0e0c760cb2783f91db366c8ae53b415dd3463598598f90781d44726c9"} Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.810430 4780 generic.go:334] "Generic (PLEG): container finished" podID="be590067-4a13-4b32-8ac4-aac7696e1762" containerID="3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611" exitCode=0 Dec 05 08:37:30 crc kubenswrapper[4780]: I1205 08:37:30.810471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerDied","Data":"3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611"} Dec 05 08:37:32 crc kubenswrapper[4780]: I1205 08:37:32.882357 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerStarted","Data":"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4"} Dec 05 08:37:32 crc kubenswrapper[4780]: I1205 08:37:32.886371 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerStarted","Data":"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b"} Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.111051 4780 scope.go:117] "RemoveContainer" containerID="acf2003408884b3383ccc2d8f0107dc330bbbec1230dc7bdd4e16f673a3931d4" Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.182480 4780 scope.go:117] "RemoveContainer" containerID="bff043084d2c36c6263603443da68b5664584a8fa1411de8aa66360852ddf92a" Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.245317 4780 scope.go:117] "RemoveContainer" containerID="3dd30f043b29082812fe9001096f8f53694448f2be09c9042edf8f78b00bd2ed" Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.895928 4780 generic.go:334] "Generic (PLEG): container finished" podID="be590067-4a13-4b32-8ac4-aac7696e1762" containerID="66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b" exitCode=0 Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.896007 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerDied","Data":"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b"} Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.901181 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerID="500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4" exitCode=0 Dec 05 08:37:33 crc kubenswrapper[4780]: I1205 08:37:33.901240 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerDied","Data":"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4"} Dec 05 08:37:34 crc kubenswrapper[4780]: I1205 08:37:34.913395 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerStarted","Data":"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95"} Dec 05 08:37:34 crc kubenswrapper[4780]: I1205 08:37:34.918350 4780 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerStarted","Data":"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16"} Dec 05 08:37:34 crc kubenswrapper[4780]: I1205 08:37:34.935819 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78mcf" podStartSLOduration=3.405978694 podStartE2EDuration="6.935800473s" podCreationTimestamp="2025-12-05 08:37:28 +0000 UTC" firstStartedPulling="2025-12-05 08:37:30.809319572 +0000 UTC m=+6684.878835904" lastFinishedPulling="2025-12-05 08:37:34.339141351 +0000 UTC m=+6688.408657683" observedRunningTime="2025-12-05 08:37:34.934640161 +0000 UTC m=+6689.004156483" watchObservedRunningTime="2025-12-05 08:37:34.935800473 +0000 UTC m=+6689.005316805" Dec 05 08:37:34 crc kubenswrapper[4780]: I1205 08:37:34.953778 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5bvt" podStartSLOduration=3.5066063769999998 podStartE2EDuration="6.953761738s" podCreationTimestamp="2025-12-05 08:37:28 +0000 UTC" firstStartedPulling="2025-12-05 08:37:30.812219481 +0000 UTC m=+6684.881735813" lastFinishedPulling="2025-12-05 08:37:34.259374842 +0000 UTC m=+6688.328891174" observedRunningTime="2025-12-05 08:37:34.951031704 +0000 UTC m=+6689.020548036" watchObservedRunningTime="2025-12-05 08:37:34.953761738 +0000 UTC m=+6689.023278070" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.039768 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.040119 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.093702 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.266154 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.266202 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:39 crc kubenswrapper[4780]: I1205 08:37:39.318970 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:40 crc kubenswrapper[4780]: I1205 08:37:40.014444 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:40 crc kubenswrapper[4780]: I1205 08:37:40.014619 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:41 crc kubenswrapper[4780]: I1205 08:37:41.701035 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:41 crc kubenswrapper[4780]: I1205 08:37:41.981431 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-78mcf" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="registry-server" containerID="cri-o://b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95" gracePeriod=2 Dec 05 
08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.934794 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.991361 4780 generic.go:334] "Generic (PLEG): container finished" podID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerID="b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95" exitCode=0 Dec 05 08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.991408 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78mcf" Dec 05 08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.991407 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerDied","Data":"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95"} Dec 05 08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.991885 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78mcf" event={"ID":"c3a28222-a43d-4728-9b13-4ddf4ef66375","Type":"ContainerDied","Data":"2e1feaf0e0c760cb2783f91db366c8ae53b415dd3463598598f90781d44726c9"} Dec 05 08:37:42 crc kubenswrapper[4780]: I1205 08:37:42.991902 4780 scope.go:117] "RemoveContainer" containerID="b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.018518 4780 scope.go:117] "RemoveContainer" containerID="500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.041469 4780 scope.go:117] "RemoveContainer" containerID="39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.094610 4780 scope.go:117] "RemoveContainer" containerID="b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95" Dec 05 08:37:43 crc kubenswrapper[4780]: E1205 08:37:43.095099 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95\": container with ID starting with b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95 not found: ID does not exist" containerID="b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.095129 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95"} err="failed to get container status \"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95\": rpc error: code = NotFound desc = could not find container \"b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95\": container with ID starting with b146b32e84326c61fde9c3ac8e39a66df261417b1a0f479e46d4d3fec56dfc95 not found: ID does not exist" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.095149 4780 scope.go:117] "RemoveContainer" containerID="500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4" Dec 05 08:37:43 crc kubenswrapper[4780]: E1205 08:37:43.095467 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4\": container with ID starting with 
500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4 not found: ID does not exist" containerID="500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.095511 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4"} err="failed to get container status \"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4\": rpc error: code = NotFound desc = could not find container \"500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4\": container with ID starting with 500b2b8b8c9c2093e2c2a0c64c5d000caa2158576d2506e59807363d2ef883b4 not found: ID does not exist" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.095544 4780 scope.go:117] "RemoveContainer" containerID="39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc" Dec 05 08:37:43 crc kubenswrapper[4780]: E1205 08:37:43.096006 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc\": container with ID starting with 39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc not found: ID does not exist" containerID="39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.096059 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc"} err="failed to get container status \"39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc\": rpc error: code = NotFound desc = could not find container \"39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc\": container with ID starting with 39f6c7af873f2cd5fa9de703e4bc75a0720bc771035470bb34b35fbc1e4fb7dc not found: ID does not exist" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.099851 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljkwh\" (UniqueName: \"kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh\") pod \"c3a28222-a43d-4728-9b13-4ddf4ef66375\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.099928 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities\") pod \"c3a28222-a43d-4728-9b13-4ddf4ef66375\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.100078 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content\") pod \"c3a28222-a43d-4728-9b13-4ddf4ef66375\" (UID: \"c3a28222-a43d-4728-9b13-4ddf4ef66375\") " Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.101773 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities" (OuterVolumeSpecName: "utilities") pod "c3a28222-a43d-4728-9b13-4ddf4ef66375" (UID: "c3a28222-a43d-4728-9b13-4ddf4ef66375"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.105994 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh" (OuterVolumeSpecName: "kube-api-access-ljkwh") pod "c3a28222-a43d-4728-9b13-4ddf4ef66375" (UID: "c3a28222-a43d-4728-9b13-4ddf4ef66375"). InnerVolumeSpecName "kube-api-access-ljkwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.118319 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3a28222-a43d-4728-9b13-4ddf4ef66375" (UID: "c3a28222-a43d-4728-9b13-4ddf4ef66375"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.202426 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljkwh\" (UniqueName: \"kubernetes.io/projected/c3a28222-a43d-4728-9b13-4ddf4ef66375-kube-api-access-ljkwh\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.202471 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.202484 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3a28222-a43d-4728-9b13-4ddf4ef66375-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.326609 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:43 crc kubenswrapper[4780]: I1205 08:37:43.335170 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-78mcf"] Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.154219 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" path="/var/lib/kubelet/pods/c3a28222-a43d-4728-9b13-4ddf4ef66375/volumes" Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.306026 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.306277 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5bvt" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="registry-server" containerID="cri-o://bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16" gracePeriod=2 Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.797383 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.936154 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p554\" (UniqueName: \"kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554\") pod \"be590067-4a13-4b32-8ac4-aac7696e1762\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.936204 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content\") pod \"be590067-4a13-4b32-8ac4-aac7696e1762\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.936441 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities\") pod \"be590067-4a13-4b32-8ac4-aac7696e1762\" (UID: \"be590067-4a13-4b32-8ac4-aac7696e1762\") " Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.937191 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities" (OuterVolumeSpecName: "utilities") pod "be590067-4a13-4b32-8ac4-aac7696e1762" (UID: "be590067-4a13-4b32-8ac4-aac7696e1762"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.941033 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554" (OuterVolumeSpecName: "kube-api-access-5p554") pod "be590067-4a13-4b32-8ac4-aac7696e1762" (UID: "be590067-4a13-4b32-8ac4-aac7696e1762"). InnerVolumeSpecName "kube-api-access-5p554". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:37:44 crc kubenswrapper[4780]: I1205 08:37:44.984454 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be590067-4a13-4b32-8ac4-aac7696e1762" (UID: "be590067-4a13-4b32-8ac4-aac7696e1762"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.009998 4780 generic.go:334] "Generic (PLEG): container finished" podID="be590067-4a13-4b32-8ac4-aac7696e1762" containerID="bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16" exitCode=0 Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.010050 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5bvt" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.010061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerDied","Data":"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16"} Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.010091 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5bvt" event={"ID":"be590067-4a13-4b32-8ac4-aac7696e1762","Type":"ContainerDied","Data":"b9015e781f605cf343942d53ba3de56de9289fc3b01eeac3bc10a3f6607b0e55"} Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.010106 4780 scope.go:117] "RemoveContainer" containerID="bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.040347 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p554\" (UniqueName: \"kubernetes.io/projected/be590067-4a13-4b32-8ac4-aac7696e1762-kube-api-access-5p554\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.040429 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.040446 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be590067-4a13-4b32-8ac4-aac7696e1762-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.041432 4780 scope.go:117] "RemoveContainer" containerID="66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.045210 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.055031 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5bvt"] Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.072184 4780 scope.go:117] "RemoveContainer" containerID="3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.115855 4780 scope.go:117] "RemoveContainer" containerID="bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16" Dec 05 08:37:45 crc kubenswrapper[4780]: E1205 08:37:45.116310 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16\": container with ID starting with bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16 not found: ID does not exist" containerID="bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.116344 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16"} err="failed to get container status \"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16\": rpc error: code = NotFound desc = could not find container \"bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16\": container with ID starting 
with bf4d77cfbaeab536d7c3a303b8d7d3935ba9dabced6f0949fd3ed90df6825c16 not found: ID does not exist" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.116365 4780 scope.go:117] "RemoveContainer" containerID="66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b" Dec 05 08:37:45 crc kubenswrapper[4780]: E1205 08:37:45.116708 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b\": container with ID starting with 66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b not found: ID does not exist" containerID="66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.116727 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b"} err="failed to get container status \"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b\": rpc error: code = NotFound desc = could not find container \"66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b\": container with ID starting with 66380cb057647b23e7452ed6ae6dae5db6d7c9f1efba9e7cffb4940d9963284b not found: ID does not exist" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.116739 4780 scope.go:117] "RemoveContainer" containerID="3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611" Dec 05 08:37:45 crc kubenswrapper[4780]: E1205 08:37:45.117379 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611\": container with ID starting with 3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611 not found: ID does not exist" containerID="3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611" Dec 05 08:37:45 crc kubenswrapper[4780]: I1205 08:37:45.117408 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611"} err="failed to get container status \"3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611\": rpc error: code = NotFound desc = could not find container \"3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611\": container with ID starting with 3043a63c0a5f2d1036a271c0d7c0718eb1da716ffc59901a988fd565040ea611 not found: ID does not exist" Dec 05 08:37:46 crc kubenswrapper[4780]: I1205 08:37:46.151414 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" path="/var/lib/kubelet/pods/be590067-4a13-4b32-8ac4-aac7696e1762/volumes" Dec 05 08:37:59 crc kubenswrapper[4780]: I1205 08:37:59.908090 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:37:59 crc kubenswrapper[4780]: I1205 08:37:59.908568 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 05 08:37:59 crc kubenswrapper[4780]: I1205 08:37:59.908616 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:37:59 crc kubenswrapper[4780]: I1205 08:37:59.909462 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:37:59 crc kubenswrapper[4780]: I1205 08:37:59.909516 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" gracePeriod=600 Dec 05 08:38:00 crc kubenswrapper[4780]: I1205 08:38:00.150098 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" exitCode=0 Dec 05 08:38:00 crc kubenswrapper[4780]: I1205 08:38:00.156520 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"} Dec 05 08:38:00 crc kubenswrapper[4780]: I1205 08:38:00.156585 4780 scope.go:117] "RemoveContainer" containerID="e860808ce9ec2a0ff91f27b5dbad877e30f9e454f036cdc9e5827014863956f9" Dec 05 08:38:00 crc kubenswrapper[4780]: E1205 08:38:00.555934 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:38:01 crc kubenswrapper[4780]: I1205 08:38:01.162469 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:38:01 crc kubenswrapper[4780]: E1205 08:38:01.162903 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:38:16 crc kubenswrapper[4780]: I1205 08:38:16.145727 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:38:16 crc kubenswrapper[4780]: E1205 08:38:16.147896 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:38:27 crc kubenswrapper[4780]: I1205 08:38:27.139115 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:38:27 crc kubenswrapper[4780]: E1205 08:38:27.140064 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:38:39 crc kubenswrapper[4780]: I1205 08:38:39.139656 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:38:39 crc kubenswrapper[4780]: E1205 08:38:39.140479 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:38:53 crc kubenswrapper[4780]: I1205 08:38:53.139960 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:38:53 crc kubenswrapper[4780]: E1205 08:38:53.140835 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:39:06 crc kubenswrapper[4780]: I1205 08:39:06.138903 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:39:06 crc kubenswrapper[4780]: E1205 08:39:06.140138 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:39:17 crc kubenswrapper[4780]: I1205 08:39:17.138347 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:39:17 crc kubenswrapper[4780]: E1205 08:39:17.139097 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:39:23 crc kubenswrapper[4780]: I1205 08:39:23.064535 4780 
generic.go:334] "Generic (PLEG): container finished" podID="47fba3c8-cdfe-4395-a921-933521a08de8" containerID="08bd81237a1bc1b7e10e127abe295c189dfbdd17a53c1795ceed26e1d2cdaaaa" exitCode=0 Dec 05 08:39:23 crc kubenswrapper[4780]: I1205 08:39:23.064618 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" event={"ID":"47fba3c8-cdfe-4395-a921-933521a08de8","Type":"ContainerDied","Data":"08bd81237a1bc1b7e10e127abe295c189dfbdd17a53c1795ceed26e1d2cdaaaa"} Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.508708 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.672269 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6d2\" (UniqueName: \"kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2\") pod \"47fba3c8-cdfe-4395-a921-933521a08de8\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.672761 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key\") pod \"47fba3c8-cdfe-4395-a921-933521a08de8\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.672806 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle\") pod \"47fba3c8-cdfe-4395-a921-933521a08de8\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.672915 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory\") pod \"47fba3c8-cdfe-4395-a921-933521a08de8\" (UID: \"47fba3c8-cdfe-4395-a921-933521a08de8\") " Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.680035 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2" (OuterVolumeSpecName: "kube-api-access-zv6d2") pod "47fba3c8-cdfe-4395-a921-933521a08de8" (UID: "47fba3c8-cdfe-4395-a921-933521a08de8"). InnerVolumeSpecName "kube-api-access-zv6d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.680160 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "47fba3c8-cdfe-4395-a921-933521a08de8" (UID: "47fba3c8-cdfe-4395-a921-933521a08de8"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.708147 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory" (OuterVolumeSpecName: "inventory") pod "47fba3c8-cdfe-4395-a921-933521a08de8" (UID: "47fba3c8-cdfe-4395-a921-933521a08de8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.719625 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "47fba3c8-cdfe-4395-a921-933521a08de8" (UID: "47fba3c8-cdfe-4395-a921-933521a08de8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.775637 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv6d2\" (UniqueName: \"kubernetes.io/projected/47fba3c8-cdfe-4395-a921-933521a08de8-kube-api-access-zv6d2\") on node \"crc\" DevicePath \"\"" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.775683 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.775698 4780 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:39:24 crc kubenswrapper[4780]: I1205 08:39:24.775715 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47fba3c8-cdfe-4395-a921-933521a08de8-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:39:25 crc kubenswrapper[4780]: I1205 08:39:25.084273 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" event={"ID":"47fba3c8-cdfe-4395-a921-933521a08de8","Type":"ContainerDied","Data":"9fa6ec8dd8e3cb0f952fb91540548a78b6517f02d84abf16aa067f83d01767e3"} Dec 05 08:39:25 crc kubenswrapper[4780]: I1205 08:39:25.084344 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf" Dec 05 08:39:25 crc kubenswrapper[4780]: I1205 08:39:25.084350 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fa6ec8dd8e3cb0f952fb91540548a78b6517f02d84abf16aa067f83d01767e3" Dec 05 08:39:31 crc kubenswrapper[4780]: I1205 08:39:31.139411 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:39:31 crc kubenswrapper[4780]: E1205 08:39:31.140096 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.881297 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-kp2b5"] Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882249 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="extract-utilities" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882264 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="extract-utilities" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882273 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882279 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882296 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47fba3c8-cdfe-4395-a921-933521a08de8" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882304 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="47fba3c8-cdfe-4395-a921-933521a08de8" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882316 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="extract-utilities" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882322 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="extract-utilities" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882332 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="extract-content" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882337 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="extract-content" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882352 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882358 4780 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: E1205 08:39:35.882373 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="extract-content" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882379 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="extract-content" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882550 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="be590067-4a13-4b32-8ac4-aac7696e1762" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882575 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="47fba3c8-cdfe-4395-a921-933521a08de8" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.882588 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a28222-a43d-4728-9b13-4ddf4ef66375" containerName="registry-server" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.883308 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892374 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892794 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892946 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.898279 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.902484 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-kp2b5"] Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:39:36 crc kubenswrapper[4780]: 
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.883308 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892374 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl"
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892794 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.892946 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.898279 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 08:39:35 crc kubenswrapper[4780]: I1205 08:39:35.902484 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-kp2b5"]
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053199 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053274 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053295 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.053427 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.155066 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.155134 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.155175 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.155191 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.161484 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.162696 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.168991 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.173202 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") pod \"bootstrap-openstack-openstack-cell1-kp2b5\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.202345 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5"
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.741586 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-kp2b5"]
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.746132 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 08:39:36 crc kubenswrapper[4780]: I1205 08:39:36.978211 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" event={"ID":"68a58c6a-0807-4975-aa44-5963fb679676","Type":"ContainerStarted","Data":"017d51d631f9cda235b435a05455202ecc0d2c07c938185ed902ab07f0dd61f2"}
Dec 05 08:39:37 crc kubenswrapper[4780]: I1205 08:39:37.989168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" event={"ID":"68a58c6a-0807-4975-aa44-5963fb679676","Type":"ContainerStarted","Data":"1ba0514aa9e118a6fa62f32ad5846a1adbc36eb5637c512821dfe8c6d5f82830"}
Dec 05 08:39:38 crc kubenswrapper[4780]: I1205 08:39:38.023617 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" podStartSLOduration=2.509935573 podStartE2EDuration="3.023593398s" podCreationTimestamp="2025-12-05 08:39:35 +0000 UTC" firstStartedPulling="2025-12-05 08:39:36.745896242 +0000 UTC m=+6810.815412574" lastFinishedPulling="2025-12-05 08:39:37.259554067 +0000 UTC m=+6811.329070399" observedRunningTime="2025-12-05 08:39:38.017926034 +0000 UTC m=+6812.087442376" watchObservedRunningTime="2025-12-05 08:39:38.023593398 +0000 UTC m=+6812.093109730"
Dec 05 08:39:46 crc kubenswrapper[4780]: I1205 08:39:46.148415 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:39:46 crc kubenswrapper[4780]: E1205 08:39:46.149102 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:39:58 crc kubenswrapper[4780]: I1205 08:39:58.138798 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:39:58 crc kubenswrapper[4780]: E1205 08:39:58.139681 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:40:09 crc kubenswrapper[4780]: I1205 08:40:09.139607 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:40:09 crc kubenswrapper[4780]: E1205 08:40:09.140411 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:40:20 crc kubenswrapper[4780]: I1205 08:40:20.139087 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:40:20 crc kubenswrapper[4780]: E1205 08:40:20.139823 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:40:33 crc kubenswrapper[4780]: I1205 08:40:33.139443 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:40:33 crc kubenswrapper[4780]: E1205 08:40:33.140565 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:40:48 crc kubenswrapper[4780]: I1205 08:40:48.139211 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:40:48 crc kubenswrapper[4780]: E1205 08:40:48.140047 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:41:00 crc kubenswrapper[4780]: I1205 08:41:00.138794 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:41:00 crc kubenswrapper[4780]: E1205 08:41:00.139701 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:41:15 crc kubenswrapper[4780]: I1205 08:41:15.139975 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:41:15 crc kubenswrapper[4780]: E1205 08:41:15.140776 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:41:29 crc kubenswrapper[4780]: I1205 08:41:29.140101 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:41:29 crc kubenswrapper[4780]: E1205 08:41:29.140857 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:41:42 crc kubenswrapper[4780]: I1205 08:41:42.138834 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:41:42 crc kubenswrapper[4780]: E1205 08:41:42.139721 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:41:56 crc kubenswrapper[4780]: I1205 08:41:56.145500 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:41:56 crc kubenswrapper[4780]: E1205 08:41:56.146214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:42:11 crc kubenswrapper[4780]: I1205 08:42:11.138719 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:42:11 crc kubenswrapper[4780]: E1205 08:42:11.139503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 08:42:24 crc kubenswrapper[4780]: I1205 08:42:24.138788 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:42:24 crc kubenswrapper[4780]: E1205 08:42:24.139900 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
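The repeated RemoveContainer / "Error syncing pod, skipping" pairs above are the kubelet re-evaluating machine-config-daemon-mjftd every ten to fifteen seconds and declining to restart it while the container sits inside its CrashLoopBackOff window. The "back-off 5m0s" text means the per-container restart delay has reached the kubelet's cap: the delay starts small and roughly doubles after each failed restart, up to five minutes. A sketch of that doubling policy, assuming a 10s initial delay alongside the 5m cap visible in the log:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSeries returns successive restart delays for a crash-looping
// container: doubling from initial up to maxDelay. The initial value here is
// an assumption; the 5m cap matches the "back-off 5m0s" messages above.
func backoffSeries(initial, maxDelay time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		out = append(out, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return out
}

func main() {
	fmt.Println(backoffSeries(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```

Once the window finally elapses (08:43:01 below), the RemoveContainer appears without a matching error and the container starts again.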
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:42:35 crc kubenswrapper[4780]: I1205 08:42:35.140543 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:42:35 crc kubenswrapper[4780]: E1205 08:42:35.141938 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:42:37 crc kubenswrapper[4780]: I1205 08:42:37.019032 4780 generic.go:334] "Generic (PLEG): container finished" podID="68a58c6a-0807-4975-aa44-5963fb679676" containerID="1ba0514aa9e118a6fa62f32ad5846a1adbc36eb5637c512821dfe8c6d5f82830" exitCode=0 Dec 05 08:42:37 crc kubenswrapper[4780]: I1205 08:42:37.019464 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" event={"ID":"68a58c6a-0807-4975-aa44-5963fb679676","Type":"ContainerDied","Data":"1ba0514aa9e118a6fa62f32ad5846a1adbc36eb5637c512821dfe8c6d5f82830"} Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.506085 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.529472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory\") pod \"68a58c6a-0807-4975-aa44-5963fb679676\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.529534 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") pod \"68a58c6a-0807-4975-aa44-5963fb679676\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.529832 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") pod \"68a58c6a-0807-4975-aa44-5963fb679676\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.529982 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") pod \"68a58c6a-0807-4975-aa44-5963fb679676\" (UID: \"68a58c6a-0807-4975-aa44-5963fb679676\") " Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.537574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n" (OuterVolumeSpecName: "kube-api-access-46z7n") pod "68a58c6a-0807-4975-aa44-5963fb679676" (UID: "68a58c6a-0807-4975-aa44-5963fb679676"). InnerVolumeSpecName "kube-api-access-46z7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.539723 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "68a58c6a-0807-4975-aa44-5963fb679676" (UID: "68a58c6a-0807-4975-aa44-5963fb679676"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.563985 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory" (OuterVolumeSpecName: "inventory") pod "68a58c6a-0807-4975-aa44-5963fb679676" (UID: "68a58c6a-0807-4975-aa44-5963fb679676"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.564352 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "68a58c6a-0807-4975-aa44-5963fb679676" (UID: "68a58c6a-0807-4975-aa44-5963fb679676"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.635996 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.636531 4780 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.636548 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46z7n\" (UniqueName: \"kubernetes.io/projected/68a58c6a-0807-4975-aa44-5963fb679676-kube-api-access-46z7n\") on node \"crc\" DevicePath \"\"" Dec 05 08:42:38 crc kubenswrapper[4780]: I1205 08:42:38.636558 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/68a58c6a-0807-4975-aa44-5963fb679676-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.036482 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" event={"ID":"68a58c6a-0807-4975-aa44-5963fb679676","Type":"ContainerDied","Data":"017d51d631f9cda235b435a05455202ecc0d2c07c938185ed902ab07f0dd61f2"} Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.036521 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017d51d631f9cda235b435a05455202ecc0d2c07c938185ed902ab07f0dd61f2" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.036539 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-kp2b5" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.129183 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-hkp2s"] Dec 05 08:42:39 crc kubenswrapper[4780]: E1205 08:42:39.129767 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a58c6a-0807-4975-aa44-5963fb679676" containerName="bootstrap-openstack-openstack-cell1" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.129792 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a58c6a-0807-4975-aa44-5963fb679676" containerName="bootstrap-openstack-openstack-cell1" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.130114 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a58c6a-0807-4975-aa44-5963fb679676" containerName="bootstrap-openstack-openstack-cell1" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.131097 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.137277 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-hkp2s"] Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.139148 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.139159 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.142078 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.142210 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.249019 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zm5m\" (UniqueName: \"kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.249170 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.249264 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.350679 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zm5m\" (UniqueName: 
\"kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.350781 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.350868 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.355024 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.356246 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.368001 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zm5m\" (UniqueName: \"kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m\") pod \"download-cache-openstack-openstack-cell1-hkp2s\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:39 crc kubenswrapper[4780]: I1205 08:42:39.448039 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:42:40 crc kubenswrapper[4780]: I1205 08:42:40.008297 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-hkp2s"] Dec 05 08:42:40 crc kubenswrapper[4780]: I1205 08:42:40.044247 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" event={"ID":"b7de0c87-8682-4630-acd8-c79d7cfa4884","Type":"ContainerStarted","Data":"98955da1095374afb70f672e009871c13be267f9f753b72fdb485f92957be320"} Dec 05 08:42:41 crc kubenswrapper[4780]: I1205 08:42:41.075345 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" event={"ID":"b7de0c87-8682-4630-acd8-c79d7cfa4884","Type":"ContainerStarted","Data":"3c44171d87e3434174001499ad46400d3da6f2c0bb629787a9a7cc46459a3b35"} Dec 05 08:42:41 crc kubenswrapper[4780]: I1205 08:42:41.096023 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" podStartSLOduration=1.561453582 podStartE2EDuration="2.095997231s" podCreationTimestamp="2025-12-05 08:42:39 +0000 UTC" firstStartedPulling="2025-12-05 08:42:40.010480097 +0000 UTC m=+6994.079996429" lastFinishedPulling="2025-12-05 08:42:40.545023746 +0000 UTC m=+6994.614540078" observedRunningTime="2025-12-05 08:42:41.092730353 +0000 UTC m=+6995.162246705" watchObservedRunningTime="2025-12-05 08:42:41.095997231 +0000 UTC m=+6995.165513553" Dec 05 08:42:49 crc kubenswrapper[4780]: I1205 08:42:49.138510 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:42:49 crc kubenswrapper[4780]: E1205 08:42:49.139526 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:43:01 crc kubenswrapper[4780]: I1205 08:43:01.139854 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533" Dec 05 08:43:02 crc kubenswrapper[4780]: I1205 08:43:02.281576 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024"} Dec 05 08:44:09 crc kubenswrapper[4780]: I1205 08:44:09.918431 4780 generic.go:334] "Generic (PLEG): container finished" podID="b7de0c87-8682-4630-acd8-c79d7cfa4884" containerID="3c44171d87e3434174001499ad46400d3da6f2c0bb629787a9a7cc46459a3b35" exitCode=0 Dec 05 08:44:09 crc kubenswrapper[4780]: I1205 08:44:09.918555 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" event={"ID":"b7de0c87-8682-4630-acd8-c79d7cfa4884","Type":"ContainerDied","Data":"3c44171d87e3434174001499ad46400d3da6f2c0bb629787a9a7cc46459a3b35"} Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.357512 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.458458 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zm5m\" (UniqueName: \"kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m\") pod \"b7de0c87-8682-4630-acd8-c79d7cfa4884\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.459016 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key\") pod \"b7de0c87-8682-4630-acd8-c79d7cfa4884\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.459040 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory\") pod \"b7de0c87-8682-4630-acd8-c79d7cfa4884\" (UID: \"b7de0c87-8682-4630-acd8-c79d7cfa4884\") " Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.467385 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m" (OuterVolumeSpecName: "kube-api-access-9zm5m") pod "b7de0c87-8682-4630-acd8-c79d7cfa4884" (UID: "b7de0c87-8682-4630-acd8-c79d7cfa4884"). InnerVolumeSpecName "kube-api-access-9zm5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.487798 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7de0c87-8682-4630-acd8-c79d7cfa4884" (UID: "b7de0c87-8682-4630-acd8-c79d7cfa4884"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.489810 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory" (OuterVolumeSpecName: "inventory") pod "b7de0c87-8682-4630-acd8-c79d7cfa4884" (UID: "b7de0c87-8682-4630-acd8-c79d7cfa4884"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.562932 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.562984 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7de0c87-8682-4630-acd8-c79d7cfa4884-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.562998 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zm5m\" (UniqueName: \"kubernetes.io/projected/b7de0c87-8682-4630-acd8-c79d7cfa4884-kube-api-access-9zm5m\") on node \"crc\" DevicePath \"\"" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.936416 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" event={"ID":"b7de0c87-8682-4630-acd8-c79d7cfa4884","Type":"ContainerDied","Data":"98955da1095374afb70f672e009871c13be267f9f753b72fdb485f92957be320"} Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.936468 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98955da1095374afb70f672e009871c13be267f9f753b72fdb485f92957be320" Dec 05 08:44:11 crc kubenswrapper[4780]: I1205 08:44:11.936474 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-hkp2s" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.020796 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-mnk58"] Dec 05 08:44:12 crc kubenswrapper[4780]: E1205 08:44:12.021298 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7de0c87-8682-4630-acd8-c79d7cfa4884" containerName="download-cache-openstack-openstack-cell1" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.021325 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7de0c87-8682-4630-acd8-c79d7cfa4884" containerName="download-cache-openstack-openstack-cell1" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.021603 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7de0c87-8682-4630-acd8-c79d7cfa4884" containerName="download-cache-openstack-openstack-cell1" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.022560 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.029244 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.029362 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.029425 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.029455 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.031247 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-mnk58"] Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.174779 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.175404 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.175823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7wv\" (UniqueName: \"kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.278129 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.278247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7wv\" (UniqueName: \"kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.278297 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " 
pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.282583 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.283952 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.302834 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7wv\" (UniqueName: \"kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv\") pod \"configure-network-openstack-openstack-cell1-mnk58\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.342329 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:44:12 crc kubenswrapper[4780]: I1205 08:44:12.974524 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-mnk58"] Dec 05 08:44:13 crc kubenswrapper[4780]: I1205 08:44:13.957957 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" event={"ID":"dcdd5c3f-33bd-4587-8757-39d37d86d865","Type":"ContainerStarted","Data":"27b46a3fc6022e035d5da0af758c1801aaed6acc810b3e5d9858c06ebfb8a42b"} Dec 05 08:44:13 crc kubenswrapper[4780]: I1205 08:44:13.959159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" event={"ID":"dcdd5c3f-33bd-4587-8757-39d37d86d865","Type":"ContainerStarted","Data":"36c6e1b7e2b6a9787ec6586ec3c9c9ba94f72e09233275f129447c43485bf5a9"} Dec 05 08:44:13 crc kubenswrapper[4780]: I1205 08:44:13.987345 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" podStartSLOduration=1.407867975 podStartE2EDuration="1.98731517s" podCreationTimestamp="2025-12-05 08:44:12 +0000 UTC" firstStartedPulling="2025-12-05 08:44:12.986449058 +0000 UTC m=+7087.055965390" lastFinishedPulling="2025-12-05 08:44:13.565896253 +0000 UTC m=+7087.635412585" observedRunningTime="2025-12-05 08:44:13.975018037 +0000 UTC m=+7088.044534389" watchObservedRunningTime="2025-12-05 08:44:13.98731517 +0000 UTC m=+7088.056831502" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.154256 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg"] Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.156920 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.159720 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.160518 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.185811 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg"] Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.258097 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.258148 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thp8s\" (UniqueName: \"kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.258253 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.362345 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.362633 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.362680 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thp8s\" (UniqueName: \"kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.363971 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume\") pod 
\"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.368106 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.377840 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thp8s\" (UniqueName: \"kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s\") pod \"collect-profiles-29415405-s6tgg\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.486814 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:00 crc kubenswrapper[4780]: I1205 08:45:00.924328 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg"] Dec 05 08:45:01 crc kubenswrapper[4780]: I1205 08:45:01.352662 4780 generic.go:334] "Generic (PLEG): container finished" podID="b9c53a41-5485-4ba5-b4d8-c612ba293495" containerID="954fa4708dd177b493a052477a5ce9bb9cccd33e76b304be847f66977d8c73ea" exitCode=0 Dec 05 08:45:01 crc kubenswrapper[4780]: I1205 08:45:01.352990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" event={"ID":"b9c53a41-5485-4ba5-b4d8-c612ba293495","Type":"ContainerDied","Data":"954fa4708dd177b493a052477a5ce9bb9cccd33e76b304be847f66977d8c73ea"} Dec 05 08:45:01 crc kubenswrapper[4780]: I1205 08:45:01.353043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" event={"ID":"b9c53a41-5485-4ba5-b4d8-c612ba293495","Type":"ContainerStarted","Data":"16ce2c2f7222dacf4ff461f9d255adf8d345cc77574a05ab7c16ec7da48b8e3a"} Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.722863 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.819827 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thp8s\" (UniqueName: \"kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s\") pod \"b9c53a41-5485-4ba5-b4d8-c612ba293495\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.819998 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume\") pod \"b9c53a41-5485-4ba5-b4d8-c612ba293495\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.820315 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume\") pod \"b9c53a41-5485-4ba5-b4d8-c612ba293495\" (UID: \"b9c53a41-5485-4ba5-b4d8-c612ba293495\") " Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.820986 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9c53a41-5485-4ba5-b4d8-c612ba293495" (UID: "b9c53a41-5485-4ba5-b4d8-c612ba293495"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.826164 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s" (OuterVolumeSpecName: "kube-api-access-thp8s") pod "b9c53a41-5485-4ba5-b4d8-c612ba293495" (UID: "b9c53a41-5485-4ba5-b4d8-c612ba293495"). InnerVolumeSpecName "kube-api-access-thp8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.826403 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9c53a41-5485-4ba5-b4d8-c612ba293495" (UID: "b9c53a41-5485-4ba5-b4d8-c612ba293495"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.923178 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thp8s\" (UniqueName: \"kubernetes.io/projected/b9c53a41-5485-4ba5-b4d8-c612ba293495-kube-api-access-thp8s\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.923221 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9c53a41-5485-4ba5-b4d8-c612ba293495-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:02 crc kubenswrapper[4780]: I1205 08:45:02.923235 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9c53a41-5485-4ba5-b4d8-c612ba293495-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:03 crc kubenswrapper[4780]: I1205 08:45:03.374920 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" event={"ID":"b9c53a41-5485-4ba5-b4d8-c612ba293495","Type":"ContainerDied","Data":"16ce2c2f7222dacf4ff461f9d255adf8d345cc77574a05ab7c16ec7da48b8e3a"} Dec 05 08:45:03 crc kubenswrapper[4780]: I1205 08:45:03.374963 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg" Dec 05 08:45:03 crc kubenswrapper[4780]: I1205 08:45:03.374978 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ce2c2f7222dacf4ff461f9d255adf8d345cc77574a05ab7c16ec7da48b8e3a" Dec 05 08:45:03 crc kubenswrapper[4780]: I1205 08:45:03.800593 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls"] Dec 05 08:45:03 crc kubenswrapper[4780]: I1205 08:45:03.809168 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415360-hrbls"] Dec 05 08:45:04 crc kubenswrapper[4780]: I1205 08:45:04.150945 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f81a97-0e86-4d9b-adde-f4a0810c763a" path="/var/lib/kubelet/pods/69f81a97-0e86-4d9b-adde-f4a0810c763a/volumes" Dec 05 08:45:29 crc kubenswrapper[4780]: I1205 08:45:29.609131 4780 generic.go:334] "Generic (PLEG): container finished" podID="dcdd5c3f-33bd-4587-8757-39d37d86d865" containerID="27b46a3fc6022e035d5da0af758c1801aaed6acc810b3e5d9858c06ebfb8a42b" exitCode=0 Dec 05 08:45:29 crc kubenswrapper[4780]: I1205 08:45:29.609219 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" event={"ID":"dcdd5c3f-33bd-4587-8757-39d37d86d865","Type":"ContainerDied","Data":"27b46a3fc6022e035d5da0af758c1801aaed6acc810b3e5d9858c06ebfb8a42b"} Dec 05 08:45:29 crc kubenswrapper[4780]: I1205 08:45:29.907720 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:45:29 crc kubenswrapper[4780]: I1205 08:45:29.907780 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.078835 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.225108 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh7wv\" (UniqueName: \"kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv\") pod \"dcdd5c3f-33bd-4587-8757-39d37d86d865\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.225262 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory\") pod \"dcdd5c3f-33bd-4587-8757-39d37d86d865\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.225356 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key\") pod \"dcdd5c3f-33bd-4587-8757-39d37d86d865\" (UID: \"dcdd5c3f-33bd-4587-8757-39d37d86d865\") " Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.232595 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv" (OuterVolumeSpecName: "kube-api-access-vh7wv") pod "dcdd5c3f-33bd-4587-8757-39d37d86d865" (UID: "dcdd5c3f-33bd-4587-8757-39d37d86d865"). InnerVolumeSpecName "kube-api-access-vh7wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.255829 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory" (OuterVolumeSpecName: "inventory") pod "dcdd5c3f-33bd-4587-8757-39d37d86d865" (UID: "dcdd5c3f-33bd-4587-8757-39d37d86d865"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.258351 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dcdd5c3f-33bd-4587-8757-39d37d86d865" (UID: "dcdd5c3f-33bd-4587-8757-39d37d86d865"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.327447 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.328032 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh7wv\" (UniqueName: \"kubernetes.io/projected/dcdd5c3f-33bd-4587-8757-39d37d86d865-kube-api-access-vh7wv\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.328129 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcdd5c3f-33bd-4587-8757-39d37d86d865-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.627978 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" event={"ID":"dcdd5c3f-33bd-4587-8757-39d37d86d865","Type":"ContainerDied","Data":"36c6e1b7e2b6a9787ec6586ec3c9c9ba94f72e09233275f129447c43485bf5a9"} Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.628271 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36c6e1b7e2b6a9787ec6586ec3c9c9ba94f72e09233275f129447c43485bf5a9" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.628413 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-mnk58" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.730977 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-dp96t"] Dec 05 08:45:31 crc kubenswrapper[4780]: E1205 08:45:31.731909 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdd5c3f-33bd-4587-8757-39d37d86d865" containerName="configure-network-openstack-openstack-cell1" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.732053 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdd5c3f-33bd-4587-8757-39d37d86d865" containerName="configure-network-openstack-openstack-cell1" Dec 05 08:45:31 crc kubenswrapper[4780]: E1205 08:45:31.732152 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c53a41-5485-4ba5-b4d8-c612ba293495" containerName="collect-profiles" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.732234 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c53a41-5485-4ba5-b4d8-c612ba293495" containerName="collect-profiles" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.732593 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c53a41-5485-4ba5-b4d8-c612ba293495" containerName="collect-profiles" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.732675 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdd5c3f-33bd-4587-8757-39d37d86d865" containerName="configure-network-openstack-openstack-cell1" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.734055 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.741864 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.742020 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.742167 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.742294 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.745270 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-dp96t"] Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.836927 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwsp\" (UniqueName: \"kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.836970 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.837343 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.940065 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.940261 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwsp\" (UniqueName: \"kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.940304 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " 
pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.945036 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.954957 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:31 crc kubenswrapper[4780]: I1205 08:45:31.962147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwsp\" (UniqueName: \"kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp\") pod \"validate-network-openstack-openstack-cell1-dp96t\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:32 crc kubenswrapper[4780]: I1205 08:45:32.054764 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:32 crc kubenswrapper[4780]: I1205 08:45:32.624187 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-dp96t"] Dec 05 08:45:32 crc kubenswrapper[4780]: I1205 08:45:32.628348 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:45:32 crc kubenswrapper[4780]: I1205 08:45:32.638428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" event={"ID":"1df681e3-86d2-4b66-9dc8-52ce67e207dc","Type":"ContainerStarted","Data":"23c837b6809f68f57c49e041f6fe093aae1e399233031e32a166d3fd6f06f8a5"} Dec 05 08:45:34 crc kubenswrapper[4780]: I1205 08:45:34.660626 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" event={"ID":"1df681e3-86d2-4b66-9dc8-52ce67e207dc","Type":"ContainerStarted","Data":"1eadcbc95546864bfecddb72b5bad3e89a9382c35128637c35b29c85f15a9091"} Dec 05 08:45:34 crc kubenswrapper[4780]: I1205 08:45:34.684413 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" podStartSLOduration=3.259981964 podStartE2EDuration="3.684384282s" podCreationTimestamp="2025-12-05 08:45:31 +0000 UTC" firstStartedPulling="2025-12-05 08:45:32.628132421 +0000 UTC m=+7166.697648743" lastFinishedPulling="2025-12-05 08:45:33.052534739 +0000 UTC m=+7167.122051061" observedRunningTime="2025-12-05 08:45:34.67766817 +0000 UTC m=+7168.747184502" watchObservedRunningTime="2025-12-05 08:45:34.684384282 +0000 UTC m=+7168.753900614" Dec 05 08:45:35 crc kubenswrapper[4780]: I1205 08:45:35.079154 4780 scope.go:117] "RemoveContainer" containerID="6edef5c8914467c4e032e854c3fad9b8fd7ebb3463143be9156eb21e1e288b60" Dec 05 08:45:39 crc kubenswrapper[4780]: I1205 08:45:39.706823 4780 generic.go:334] "Generic (PLEG): container finished" podID="1df681e3-86d2-4b66-9dc8-52ce67e207dc" 
containerID="1eadcbc95546864bfecddb72b5bad3e89a9382c35128637c35b29c85f15a9091" exitCode=0 Dec 05 08:45:39 crc kubenswrapper[4780]: I1205 08:45:39.706928 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" event={"ID":"1df681e3-86d2-4b66-9dc8-52ce67e207dc","Type":"ContainerDied","Data":"1eadcbc95546864bfecddb72b5bad3e89a9382c35128637c35b29c85f15a9091"} Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.241672 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.338173 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory\") pod \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.338410 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key\") pod \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.338508 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdwsp\" (UniqueName: \"kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp\") pod \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\" (UID: \"1df681e3-86d2-4b66-9dc8-52ce67e207dc\") " Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.394413 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp" (OuterVolumeSpecName: "kube-api-access-qdwsp") pod "1df681e3-86d2-4b66-9dc8-52ce67e207dc" (UID: "1df681e3-86d2-4b66-9dc8-52ce67e207dc"). InnerVolumeSpecName "kube-api-access-qdwsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.408430 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1df681e3-86d2-4b66-9dc8-52ce67e207dc" (UID: "1df681e3-86d2-4b66-9dc8-52ce67e207dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.415090 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory" (OuterVolumeSpecName: "inventory") pod "1df681e3-86d2-4b66-9dc8-52ce67e207dc" (UID: "1df681e3-86d2-4b66-9dc8-52ce67e207dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.442595 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.442628 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1df681e3-86d2-4b66-9dc8-52ce67e207dc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.442640 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdwsp\" (UniqueName: \"kubernetes.io/projected/1df681e3-86d2-4b66-9dc8-52ce67e207dc-kube-api-access-qdwsp\") on node \"crc\" DevicePath \"\"" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.730724 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" event={"ID":"1df681e3-86d2-4b66-9dc8-52ce67e207dc","Type":"ContainerDied","Data":"23c837b6809f68f57c49e041f6fe093aae1e399233031e32a166d3fd6f06f8a5"} Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.730770 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c837b6809f68f57c49e041f6fe093aae1e399233031e32a166d3fd6f06f8a5" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.730835 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-dp96t" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.800776 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z89qw"] Dec 05 08:45:41 crc kubenswrapper[4780]: E1205 08:45:41.801349 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df681e3-86d2-4b66-9dc8-52ce67e207dc" containerName="validate-network-openstack-openstack-cell1" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.801372 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df681e3-86d2-4b66-9dc8-52ce67e207dc" containerName="validate-network-openstack-openstack-cell1" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.801662 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df681e3-86d2-4b66-9dc8-52ce67e207dc" containerName="validate-network-openstack-openstack-cell1" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.802594 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.805011 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.805531 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.805721 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.806058 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.815911 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z89qw"] Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.959058 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.959171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4kqm\" (UniqueName: \"kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:41 crc kubenswrapper[4780]: I1205 08:45:41.959209 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.061272 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.061417 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4kqm\" (UniqueName: \"kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.061456 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.074824 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.074960 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.077808 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4kqm\" (UniqueName: \"kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm\") pod \"install-os-openstack-openstack-cell1-z89qw\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") " pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.119578 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z89qw" Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.661184 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z89qw"] Dec 05 08:45:42 crc kubenswrapper[4780]: I1205 08:45:42.743853 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z89qw" event={"ID":"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7","Type":"ContainerStarted","Data":"fddb7503f0856614112daaf568aa14987c3bf66e488dbbd71fcf009661eb1152"} Dec 05 08:45:43 crc kubenswrapper[4780]: I1205 08:45:43.752147 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z89qw" event={"ID":"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7","Type":"ContainerStarted","Data":"c209fb1620d2fd86cf0f77f5687422dd216a77faf88e61af6192cf4192ccffeb"} Dec 05 08:45:59 crc kubenswrapper[4780]: I1205 08:45:59.907908 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:45:59 crc kubenswrapper[4780]: I1205 08:45:59.908710 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:46:26 crc kubenswrapper[4780]: I1205 08:46:26.122422 4780 generic.go:334] "Generic (PLEG): container finished" podID="9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" containerID="c209fb1620d2fd86cf0f77f5687422dd216a77faf88e61af6192cf4192ccffeb" exitCode=0 Dec 05 08:46:26 crc kubenswrapper[4780]: I1205 08:46:26.122730 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z89qw" event={"ID":"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7","Type":"ContainerDied","Data":"c209fb1620d2fd86cf0f77f5687422dd216a77faf88e61af6192cf4192ccffeb"} Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 
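Every dataplane step in this section follows the same arc: "SyncLoop ADD" for the new job pod, secret/projected volumes mounted, ContainerStarted for the sandbox and then the job container, "container finished" with exitCode=0, ContainerDied, and volume teardown. A short Go sketch that condenses a journal capture into just those per-pod transitions; the regexes target only fields visible in these lines (pod="..." and the event Type), so this is a reading aid, not any kubelet interface.

// plegtrace.go - prints pod lifecycle transitions from kubenswrapper journal
// lines fed on stdin, keying on the PLEG event records shown above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	podRe  = regexp.MustCompile(`pod="([^"]+)"`)
	typeRe = regexp.MustCompile(`"Type":"(ContainerStarted|ContainerDied)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can be long
	for sc.Scan() {
		line := sc.Text()
		t := typeRe.FindStringSubmatch(line)
		p := podRe.FindStringSubmatch(line)
		if t != nil && p != nil {
			fmt.Printf("%-17s %s\n", t[1], p[1])
		}
	}
}

Piping this section through it yields the ordered Started/Died pairs for validate-network, install-os, configure-os, and ssh-known-hosts, which makes the job chain much easier to follow than the raw records.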
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.603982 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z89qw"
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.728054 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory\") pod \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") "
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.728226 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key\") pod \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") "
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.728412 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4kqm\" (UniqueName: \"kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm\") pod \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\" (UID: \"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7\") "
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.734083 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm" (OuterVolumeSpecName: "kube-api-access-d4kqm") pod "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" (UID: "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7"). InnerVolumeSpecName "kube-api-access-d4kqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.759462 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" (UID: "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.766308 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory" (OuterVolumeSpecName: "inventory") pod "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" (UID: "9a5eaea9-50ed-4f64-8c7e-4f92de056ed7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.830668 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4kqm\" (UniqueName: \"kubernetes.io/projected/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-kube-api-access-d4kqm\") on node \"crc\" DevicePath \"\""
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.830702 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 08:46:27 crc kubenswrapper[4780]: I1205 08:46:27.830711 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a5eaea9-50ed-4f64-8c7e-4f92de056ed7-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.141279 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z89qw"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.159679 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z89qw" event={"ID":"9a5eaea9-50ed-4f64-8c7e-4f92de056ed7","Type":"ContainerDied","Data":"fddb7503f0856614112daaf568aa14987c3bf66e488dbbd71fcf009661eb1152"}
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.159755 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddb7503f0856614112daaf568aa14987c3bf66e488dbbd71fcf009661eb1152"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.234497 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-76cmr"]
Dec 05 08:46:28 crc kubenswrapper[4780]: E1205 08:46:28.235039 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" containerName="install-os-openstack-openstack-cell1"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.235061 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" containerName="install-os-openstack-openstack-cell1"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.235329 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5eaea9-50ed-4f64-8c7e-4f92de056ed7" containerName="install-os-openstack-openstack-cell1"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.236179 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.239002 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.239123 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.241758 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.247227 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-76cmr"]
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.247508 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.341971 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skp7f\" (UniqueName: \"kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.342080 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.342138 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.444198 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skp7f\" (UniqueName: \"kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.444314 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.444372 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.451255 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.456085 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.461717 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skp7f\" (UniqueName: \"kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f\") pod \"configure-os-openstack-openstack-cell1-76cmr\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") " pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:28 crc kubenswrapper[4780]: I1205 08:46:28.577925 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.090829 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-76cmr"]
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.150529 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-76cmr" event={"ID":"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5","Type":"ContainerStarted","Data":"2b2581e1a2975744a184b38b3d6a21d8b341fef2a2d91883ade573f2e85997bf"}
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.908107 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.908404 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.908470 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd"
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.911595 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 08:46:29 crc kubenswrapper[4780]: I1205 08:46:29.911688 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024" gracePeriod=600
Dec 05 08:46:30 crc kubenswrapper[4780]: I1205 08:46:30.160011 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-76cmr" event={"ID":"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5","Type":"ContainerStarted","Data":"10ac6f70542803a696f9fe137fb7fe081c703281afc0111ccb490c271ea7fd4e"}
Dec 05 08:46:30 crc kubenswrapper[4780]: I1205 08:46:30.164747 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024" exitCode=0
Dec 05 08:46:30 crc kubenswrapper[4780]: I1205 08:46:30.164818 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024"}
Dec 05 08:46:30 crc kubenswrapper[4780]: I1205 08:46:30.165184 4780 scope.go:117] "RemoveContainer" containerID="617077f42dc398d473910e17eb784db4cdd2ba6e0c4d6a3720a6610a1de5d533"
Dec 05 08:46:30 crc kubenswrapper[4780]: I1205 08:46:30.190314 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-76cmr" podStartSLOduration=1.681669296 podStartE2EDuration="2.190295215s" podCreationTimestamp="2025-12-05 08:46:28 +0000 UTC" firstStartedPulling="2025-12-05 08:46:29.094720208 +0000 UTC m=+7223.164236540" lastFinishedPulling="2025-12-05 08:46:29.603346127 +0000 UTC m=+7223.672862459" observedRunningTime="2025-12-05 08:46:30.178569947 +0000 UTC m=+7224.248086289" watchObservedRunningTime="2025-12-05 08:46:30.190295215 +0000 UTC m=+7224.259811547"
Dec 05 08:46:31 crc kubenswrapper[4780]: I1205 08:46:31.177704 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256"}
Dec 05 08:47:10 crc kubenswrapper[4780]: I1205 08:47:10.587164 4780 generic.go:334] "Generic (PLEG): container finished" podID="0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" containerID="10ac6f70542803a696f9fe137fb7fe081c703281afc0111ccb490c271ea7fd4e" exitCode=0
Dec 05 08:47:10 crc kubenswrapper[4780]: I1205 08:47:10.587595 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-76cmr" event={"ID":"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5","Type":"ContainerDied","Data":"10ac6f70542803a696f9fe137fb7fe081c703281afc0111ccb490c271ea7fd4e"}
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.037627 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.177608 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skp7f\" (UniqueName: \"kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f\") pod \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") "
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.178067 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key\") pod \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") "
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.178104 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory\") pod \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\" (UID: \"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5\") "
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.184427 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f" (OuterVolumeSpecName: "kube-api-access-skp7f") pod "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" (UID: "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5"). InnerVolumeSpecName "kube-api-access-skp7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.210096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory" (OuterVolumeSpecName: "inventory") pod "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" (UID: "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.214342 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" (UID: "0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.282061 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.282158 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.282173 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skp7f\" (UniqueName: \"kubernetes.io/projected/0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5-kube-api-access-skp7f\") on node \"crc\" DevicePath \"\""
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.614092 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-76cmr" event={"ID":"0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5","Type":"ContainerDied","Data":"2b2581e1a2975744a184b38b3d6a21d8b341fef2a2d91883ade573f2e85997bf"}
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.614149 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2581e1a2975744a184b38b3d6a21d8b341fef2a2d91883ade573f2e85997bf"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.614207 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-76cmr"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.721952 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-zvpbm"]
Dec 05 08:47:12 crc kubenswrapper[4780]: E1205 08:47:12.722393 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" containerName="configure-os-openstack-openstack-cell1"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.722411 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" containerName="configure-os-openstack-openstack-cell1"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.722634 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5" containerName="configure-os-openstack-openstack-cell1"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.723343 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.726171 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.726474 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.727043 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.729278 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.732683 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-zvpbm"]
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.893907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t6x\" (UniqueName: \"kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.893960 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.894152 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.996285 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2t6x\" (UniqueName: \"kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.996348 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:12 crc kubenswrapper[4780]: I1205 08:47:12.996424 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.000724 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.000864 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.014757 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2t6x\" (UniqueName: \"kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x\") pod \"ssh-known-hosts-openstack-zvpbm\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") " pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.041231 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.597745 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-zvpbm"]
Dec 05 08:47:13 crc kubenswrapper[4780]: I1205 08:47:13.627622 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-zvpbm" event={"ID":"0bb77d44-7251-43b1-864b-aa98ab803837","Type":"ContainerStarted","Data":"2b28621e3ab6f159ab4e16a10a1ac8c9b79f094971290b5f08b9d7776b7f090b"}
Dec 05 08:47:14 crc kubenswrapper[4780]: I1205 08:47:14.648119 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-zvpbm" event={"ID":"0bb77d44-7251-43b1-864b-aa98ab803837","Type":"ContainerStarted","Data":"22f55495309d662d41a9a357a6123bdc1eaec8ae0f57981bf59ffa251d052abe"}
Dec 05 08:47:14 crc kubenswrapper[4780]: I1205 08:47:14.668368 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-zvpbm" podStartSLOduration=2.246520323 podStartE2EDuration="2.668331791s" podCreationTimestamp="2025-12-05 08:47:12 +0000 UTC" firstStartedPulling="2025-12-05 08:47:13.607587759 +0000 UTC m=+7267.677104091" lastFinishedPulling="2025-12-05 08:47:14.029399227 +0000 UTC m=+7268.098915559" observedRunningTime="2025-12-05 08:47:14.667257813 +0000 UTC m=+7268.736774145" watchObservedRunningTime="2025-12-05 08:47:14.668331791 +0000 UTC m=+7268.737848123"
Dec 05 08:47:22 crc kubenswrapper[4780]: I1205 08:47:22.722521 4780 generic.go:334] "Generic (PLEG): container finished" podID="0bb77d44-7251-43b1-864b-aa98ab803837" containerID="22f55495309d662d41a9a357a6123bdc1eaec8ae0f57981bf59ffa251d052abe" exitCode=0
Dec 05 08:47:22 crc kubenswrapper[4780]: I1205 08:47:22.722588 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-zvpbm" event={"ID":"0bb77d44-7251-43b1-864b-aa98ab803837","Type":"ContainerDied","Data":"22f55495309d662d41a9a357a6123bdc1eaec8ae0f57981bf59ffa251d052abe"}
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.210060 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-zvpbm"
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.339839 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0\") pod \"0bb77d44-7251-43b1-864b-aa98ab803837\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") "
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.340024 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2t6x\" (UniqueName: \"kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x\") pod \"0bb77d44-7251-43b1-864b-aa98ab803837\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") "
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.340079 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1\") pod \"0bb77d44-7251-43b1-864b-aa98ab803837\" (UID: \"0bb77d44-7251-43b1-864b-aa98ab803837\") "
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.346484 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x" (OuterVolumeSpecName: "kube-api-access-s2t6x") pod "0bb77d44-7251-43b1-864b-aa98ab803837" (UID: "0bb77d44-7251-43b1-864b-aa98ab803837"). InnerVolumeSpecName "kube-api-access-s2t6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.377163 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0bb77d44-7251-43b1-864b-aa98ab803837" (UID: "0bb77d44-7251-43b1-864b-aa98ab803837"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.377574 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0bb77d44-7251-43b1-864b-aa98ab803837" (UID: "0bb77d44-7251-43b1-864b-aa98ab803837"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.442198 4780 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.442233 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2t6x\" (UniqueName: \"kubernetes.io/projected/0bb77d44-7251-43b1-864b-aa98ab803837-kube-api-access-s2t6x\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.442245 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0bb77d44-7251-43b1-864b-aa98ab803837-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.751148 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-zvpbm" event={"ID":"0bb77d44-7251-43b1-864b-aa98ab803837","Type":"ContainerDied","Data":"2b28621e3ab6f159ab4e16a10a1ac8c9b79f094971290b5f08b9d7776b7f090b"} Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.751466 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b28621e3ab6f159ab4e16a10a1ac8c9b79f094971290b5f08b9d7776b7f090b" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.751433 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-zvpbm" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.823407 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-nv2qj"] Dec 05 08:47:24 crc kubenswrapper[4780]: E1205 08:47:24.824218 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb77d44-7251-43b1-864b-aa98ab803837" containerName="ssh-known-hosts-openstack" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.824249 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb77d44-7251-43b1-864b-aa98ab803837" containerName="ssh-known-hosts-openstack" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.824541 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb77d44-7251-43b1-864b-aa98ab803837" containerName="ssh-known-hosts-openstack" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.825323 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.828601 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.828642 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.828834 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.828874 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.839521 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-nv2qj"] Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.953418 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.953474 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:24 crc kubenswrapper[4780]: I1205 08:47:24.953690 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrhw\" (UniqueName: \"kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.055766 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrhw\" (UniqueName: \"kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.055905 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.055934 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.060396 4780 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.060394 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.071779 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrhw\" (UniqueName: \"kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw\") pod \"run-os-openstack-openstack-cell1-nv2qj\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.156598 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.638448 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-nv2qj"] Dec 05 08:47:25 crc kubenswrapper[4780]: I1205 08:47:25.761406 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" event={"ID":"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6","Type":"ContainerStarted","Data":"cbfab3dbc1307de59b08e2da971d805a16b1e1eb86769ea67ca61e59565b7f75"} Dec 05 08:47:26 crc kubenswrapper[4780]: I1205 08:47:26.771718 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" event={"ID":"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6","Type":"ContainerStarted","Data":"38ebc327a76a1ca1f7a6ba11c4548e80a86b703c8456e72a591c3a4ae1e8eb8a"} Dec 05 08:47:26 crc kubenswrapper[4780]: I1205 08:47:26.800794 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" podStartSLOduration=2.379357909 podStartE2EDuration="2.800773926s" podCreationTimestamp="2025-12-05 08:47:24 +0000 UTC" firstStartedPulling="2025-12-05 08:47:25.654810715 +0000 UTC m=+7279.724327047" lastFinishedPulling="2025-12-05 08:47:26.076226732 +0000 UTC m=+7280.145743064" observedRunningTime="2025-12-05 08:47:26.788522894 +0000 UTC m=+7280.858039236" watchObservedRunningTime="2025-12-05 08:47:26.800773926 +0000 UTC m=+7280.870290258" Dec 05 08:47:33 crc kubenswrapper[4780]: I1205 08:47:33.834367 4780 generic.go:334] "Generic (PLEG): container finished" podID="34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" containerID="38ebc327a76a1ca1f7a6ba11c4548e80a86b703c8456e72a591c3a4ae1e8eb8a" exitCode=0 Dec 05 08:47:33 crc kubenswrapper[4780]: I1205 08:47:33.834489 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" event={"ID":"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6","Type":"ContainerDied","Data":"38ebc327a76a1ca1f7a6ba11c4548e80a86b703c8456e72a591c3a4ae1e8eb8a"} Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.249653 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.402321 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory\") pod \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.402465 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckrhw\" (UniqueName: \"kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw\") pod \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.402582 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key\") pod \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\" (UID: \"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6\") " Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.407988 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw" (OuterVolumeSpecName: "kube-api-access-ckrhw") pod "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" (UID: "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6"). InnerVolumeSpecName "kube-api-access-ckrhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.434006 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" (UID: "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.434132 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory" (OuterVolumeSpecName: "inventory") pod "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" (UID: "34e20bd1-a448-4d66-b6eb-8d7b2f7108c6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.504774 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckrhw\" (UniqueName: \"kubernetes.io/projected/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-kube-api-access-ckrhw\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.504812 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.504822 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34e20bd1-a448-4d66-b6eb-8d7b2f7108c6-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.851659 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" event={"ID":"34e20bd1-a448-4d66-b6eb-8d7b2f7108c6","Type":"ContainerDied","Data":"cbfab3dbc1307de59b08e2da971d805a16b1e1eb86769ea67ca61e59565b7f75"} Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.851697 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbfab3dbc1307de59b08e2da971d805a16b1e1eb86769ea67ca61e59565b7f75" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.852079 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-nv2qj" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.931254 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-hkz6x"] Dec 05 08:47:35 crc kubenswrapper[4780]: E1205 08:47:35.932147 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" containerName="run-os-openstack-openstack-cell1" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.932201 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" containerName="run-os-openstack-openstack-cell1" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.932419 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e20bd1-a448-4d66-b6eb-8d7b2f7108c6" containerName="run-os-openstack-openstack-cell1" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.933128 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.935112 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.935277 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.939217 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.939413 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:47:35 crc kubenswrapper[4780]: I1205 08:47:35.940968 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-hkz6x"] Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.117184 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.117259 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.117469 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zccl2\" (UniqueName: \"kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.219762 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.220075 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zccl2\" (UniqueName: \"kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.220351 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.226612 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.233941 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.236648 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zccl2\" (UniqueName: \"kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2\") pod \"reboot-os-openstack-openstack-cell1-hkz6x\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.258056 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.742083 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-hkz6x"] Dec 05 08:47:36 crc kubenswrapper[4780]: I1205 08:47:36.868781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" event={"ID":"942cc838-8886-4800-b03e-5e286e6700c0","Type":"ContainerStarted","Data":"ad224fe49102f224a974a29231828cdd17bb347b4db882c551deba18d91d90ed"} Dec 05 08:47:37 crc kubenswrapper[4780]: I1205 08:47:37.884708 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" event={"ID":"942cc838-8886-4800-b03e-5e286e6700c0","Type":"ContainerStarted","Data":"8865148e6dfa3dac3b149e389d2494d1976b3fac1d12c46fe82ca1ab6857d386"} Dec 05 08:47:37 crc kubenswrapper[4780]: I1205 08:47:37.909676 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" podStartSLOduration=2.398607529 podStartE2EDuration="2.909655732s" podCreationTimestamp="2025-12-05 08:47:35 +0000 UTC" firstStartedPulling="2025-12-05 08:47:36.747053231 +0000 UTC m=+7290.816569563" lastFinishedPulling="2025-12-05 08:47:37.258101434 +0000 UTC m=+7291.327617766" observedRunningTime="2025-12-05 08:47:37.89999515 +0000 UTC m=+7291.969511482" watchObservedRunningTime="2025-12-05 08:47:37.909655732 +0000 UTC m=+7291.979172064" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.028161 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.031099 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.050709 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.231220 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.231790 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhcch\" (UniqueName: \"kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.231873 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.333409 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhcch\" (UniqueName: \"kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.333491 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.333632 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.334205 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.334619 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.358780 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zhcch\" (UniqueName: \"kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch\") pod \"community-operators-nlj2t\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:46 crc kubenswrapper[4780]: I1205 08:47:46.652626 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:47 crc kubenswrapper[4780]: I1205 08:47:47.118468 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:47:47 crc kubenswrapper[4780]: I1205 08:47:47.979238 4780 generic.go:334] "Generic (PLEG): container finished" podID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerID="546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5" exitCode=0 Dec 05 08:47:47 crc kubenswrapper[4780]: I1205 08:47:47.979283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerDied","Data":"546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5"} Dec 05 08:47:47 crc kubenswrapper[4780]: I1205 08:47:47.979565 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerStarted","Data":"7855e121c1c82f78dd1ce6764f8451aeee06ce4cbc9b1049120843329be96b99"} Dec 05 08:47:48 crc kubenswrapper[4780]: I1205 08:47:48.993538 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerStarted","Data":"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155"} Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.002913 4780 generic.go:334] "Generic (PLEG): container finished" podID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerID="a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155" exitCode=0 Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.002992 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerDied","Data":"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155"} Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.799309 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.805392 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.812839 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.931393 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.931478 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:50 crc kubenswrapper[4780]: I1205 08:47:50.931547 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkzw\" (UniqueName: \"kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.015972 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerStarted","Data":"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0"} Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.031419 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlj2t" podStartSLOduration=2.561069986 podStartE2EDuration="5.031401965s" podCreationTimestamp="2025-12-05 08:47:46 +0000 UTC" firstStartedPulling="2025-12-05 08:47:47.980867461 +0000 UTC m=+7302.050383793" lastFinishedPulling="2025-12-05 08:47:50.45119944 +0000 UTC m=+7304.520715772" observedRunningTime="2025-12-05 08:47:51.03082214 +0000 UTC m=+7305.100338482" watchObservedRunningTime="2025-12-05 08:47:51.031401965 +0000 UTC m=+7305.100918297" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.033179 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.033270 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.033338 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lkzw\" (UniqueName: \"kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw\") pod \"redhat-marketplace-9g9nr\" (UID: 
\"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.033675 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.033716 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.060661 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkzw\" (UniqueName: \"kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw\") pod \"redhat-marketplace-9g9nr\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.134475 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:47:51 crc kubenswrapper[4780]: I1205 08:47:51.651519 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:47:51 crc kubenswrapper[4780]: W1205 08:47:51.652034 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64227ea0_6ec7_4177_a844_f8596789e125.slice/crio-0a3e242564fad41e74d5a06bf121f08731c6971f4dc1804132e8f24ac15b5244 WatchSource:0}: Error finding container 0a3e242564fad41e74d5a06bf121f08731c6971f4dc1804132e8f24ac15b5244: Status 404 returned error can't find the container with id 0a3e242564fad41e74d5a06bf121f08731c6971f4dc1804132e8f24ac15b5244 Dec 05 08:47:52 crc kubenswrapper[4780]: I1205 08:47:52.029477 4780 generic.go:334] "Generic (PLEG): container finished" podID="64227ea0-6ec7-4177-a844-f8596789e125" containerID="6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba" exitCode=0 Dec 05 08:47:52 crc kubenswrapper[4780]: I1205 08:47:52.029548 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerDied","Data":"6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba"} Dec 05 08:47:52 crc kubenswrapper[4780]: I1205 08:47:52.029638 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerStarted","Data":"0a3e242564fad41e74d5a06bf121f08731c6971f4dc1804132e8f24ac15b5244"} Dec 05 08:47:53 crc kubenswrapper[4780]: I1205 08:47:53.038945 4780 generic.go:334] "Generic (PLEG): container finished" podID="942cc838-8886-4800-b03e-5e286e6700c0" containerID="8865148e6dfa3dac3b149e389d2494d1976b3fac1d12c46fe82ca1ab6857d386" exitCode=0 Dec 05 08:47:53 crc kubenswrapper[4780]: I1205 08:47:53.039033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" 
event={"ID":"942cc838-8886-4800-b03e-5e286e6700c0","Type":"ContainerDied","Data":"8865148e6dfa3dac3b149e389d2494d1976b3fac1d12c46fe82ca1ab6857d386"} Dec 05 08:47:53 crc kubenswrapper[4780]: I1205 08:47:53.041993 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerStarted","Data":"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60"} Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.052645 4780 generic.go:334] "Generic (PLEG): container finished" podID="64227ea0-6ec7-4177-a844-f8596789e125" containerID="ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60" exitCode=0 Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.052737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerDied","Data":"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60"} Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.527846 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.713063 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory\") pod \"942cc838-8886-4800-b03e-5e286e6700c0\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.713193 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key\") pod \"942cc838-8886-4800-b03e-5e286e6700c0\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.713379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zccl2\" (UniqueName: \"kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2\") pod \"942cc838-8886-4800-b03e-5e286e6700c0\" (UID: \"942cc838-8886-4800-b03e-5e286e6700c0\") " Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.732184 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2" (OuterVolumeSpecName: "kube-api-access-zccl2") pod "942cc838-8886-4800-b03e-5e286e6700c0" (UID: "942cc838-8886-4800-b03e-5e286e6700c0"). InnerVolumeSpecName "kube-api-access-zccl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.748002 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "942cc838-8886-4800-b03e-5e286e6700c0" (UID: "942cc838-8886-4800-b03e-5e286e6700c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.752048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory" (OuterVolumeSpecName: "inventory") pod "942cc838-8886-4800-b03e-5e286e6700c0" (UID: "942cc838-8886-4800-b03e-5e286e6700c0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.815510 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.815548 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zccl2\" (UniqueName: \"kubernetes.io/projected/942cc838-8886-4800-b03e-5e286e6700c0-kube-api-access-zccl2\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:54 crc kubenswrapper[4780]: I1205 08:47:54.815559 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/942cc838-8886-4800-b03e-5e286e6700c0-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.062501 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" event={"ID":"942cc838-8886-4800-b03e-5e286e6700c0","Type":"ContainerDied","Data":"ad224fe49102f224a974a29231828cdd17bb347b4db882c551deba18d91d90ed"} Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.062538 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad224fe49102f224a974a29231828cdd17bb347b4db882c551deba18d91d90ed" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.062552 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-hkz6x" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.223969 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-jhkh5"] Dec 05 08:47:55 crc kubenswrapper[4780]: E1205 08:47:55.224409 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="942cc838-8886-4800-b03e-5e286e6700c0" containerName="reboot-os-openstack-openstack-cell1" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.224427 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="942cc838-8886-4800-b03e-5e286e6700c0" containerName="reboot-os-openstack-openstack-cell1" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.224628 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="942cc838-8886-4800-b03e-5e286e6700c0" containerName="reboot-os-openstack-openstack-cell1" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.225379 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.228630 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.228813 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.228848 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.228814 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.229032 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.229064 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.229217 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.229284 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.246094 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-jhkh5"] Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327559 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327591 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327627 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: 
\"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327661 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327694 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327732 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327759 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327795 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfj6\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327907 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327956 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.327988 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.328049 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.328091 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430097 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430169 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430205 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430247 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430298 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430331 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfj6\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430370 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430413 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430451 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430482 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430547 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430588 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430648 
4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430676 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.430701 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.439737 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.443954 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.444156 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.444544 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.444923 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " 
pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.445360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.445744 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.446490 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.446833 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.447259 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.447450 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.447463 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.450060 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " 
pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.454088 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.454496 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfj6\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6\") pod \"install-certs-openstack-openstack-cell1-jhkh5\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:55 crc kubenswrapper[4780]: I1205 08:47:55.583968 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.077968 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerStarted","Data":"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d"} Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.105706 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9g9nr" podStartSLOduration=3.019147511 podStartE2EDuration="6.105689782s" podCreationTimestamp="2025-12-05 08:47:50 +0000 UTC" firstStartedPulling="2025-12-05 08:47:52.031198388 +0000 UTC m=+7306.100714720" lastFinishedPulling="2025-12-05 08:47:55.117740659 +0000 UTC m=+7309.187256991" observedRunningTime="2025-12-05 08:47:56.100395128 +0000 UTC m=+7310.169911470" watchObservedRunningTime="2025-12-05 08:47:56.105689782 +0000 UTC m=+7310.175206114" Dec 05 08:47:56 crc kubenswrapper[4780]: W1205 08:47:56.181420 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ae81285_c333_416d_b78f_dceba6c3ffda.slice/crio-dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285 WatchSource:0}: Error finding container dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285: Status 404 returned error can't find the container with id dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285 Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.183469 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-jhkh5"] Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.653372 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.653914 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:56 crc kubenswrapper[4780]: I1205 08:47:56.712279 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:57 crc kubenswrapper[4780]: I1205 08:47:57.089185 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" event={"ID":"2ae81285-c333-416d-b78f-dceba6c3ffda","Type":"ContainerStarted","Data":"a00b450996e77bb323723889895d2995dd185809aeefbc14127e053dad2a124d"} Dec 05 08:47:57 crc kubenswrapper[4780]: I1205 08:47:57.089219 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" event={"ID":"2ae81285-c333-416d-b78f-dceba6c3ffda","Type":"ContainerStarted","Data":"dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285"} Dec 05 08:47:57 crc kubenswrapper[4780]: I1205 08:47:57.114446 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" podStartSLOduration=1.720486934 podStartE2EDuration="2.114421077s" podCreationTimestamp="2025-12-05 08:47:55 +0000 UTC" firstStartedPulling="2025-12-05 08:47:56.184621958 +0000 UTC m=+7310.254138290" lastFinishedPulling="2025-12-05 08:47:56.578556101 +0000 UTC m=+7310.648072433" observedRunningTime="2025-12-05 08:47:57.109356199 +0000 UTC m=+7311.178872551" watchObservedRunningTime="2025-12-05 08:47:57.114421077 +0000 UTC m=+7311.183937409" Dec 05 08:47:57 crc kubenswrapper[4780]: I1205 08:47:57.147124 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:57 crc kubenswrapper[4780]: I1205 08:47:57.591085 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.105732 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlj2t" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="registry-server" containerID="cri-o://de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0" gracePeriod=2 Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.563313 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.629558 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities\") pod \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.629707 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content\") pod \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.629771 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhcch\" (UniqueName: \"kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch\") pod \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\" (UID: \"3bb92809-0e67-4b2f-ae25-7ac836bc9551\") " Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.635950 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch" (OuterVolumeSpecName: "kube-api-access-zhcch") pod "3bb92809-0e67-4b2f-ae25-7ac836bc9551" (UID: "3bb92809-0e67-4b2f-ae25-7ac836bc9551"). 
InnerVolumeSpecName "kube-api-access-zhcch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.659618 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities" (OuterVolumeSpecName: "utilities") pod "3bb92809-0e67-4b2f-ae25-7ac836bc9551" (UID: "3bb92809-0e67-4b2f-ae25-7ac836bc9551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.697244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bb92809-0e67-4b2f-ae25-7ac836bc9551" (UID: "3bb92809-0e67-4b2f-ae25-7ac836bc9551"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.732575 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.732606 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bb92809-0e67-4b2f-ae25-7ac836bc9551-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:47:59 crc kubenswrapper[4780]: I1205 08:47:59.732618 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhcch\" (UniqueName: \"kubernetes.io/projected/3bb92809-0e67-4b2f-ae25-7ac836bc9551-kube-api-access-zhcch\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.128706 4780 generic.go:334] "Generic (PLEG): container finished" podID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerID="de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0" exitCode=0 Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.128751 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerDied","Data":"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0"} Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.128781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlj2t" event={"ID":"3bb92809-0e67-4b2f-ae25-7ac836bc9551","Type":"ContainerDied","Data":"7855e121c1c82f78dd1ce6764f8451aeee06ce4cbc9b1049120843329be96b99"} Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.128778 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlj2t" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.128862 4780 scope.go:117] "RemoveContainer" containerID="de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.155756 4780 scope.go:117] "RemoveContainer" containerID="a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.164914 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.173612 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlj2t"] Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.190703 4780 scope.go:117] "RemoveContainer" containerID="546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.222986 4780 scope.go:117] "RemoveContainer" containerID="de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0" Dec 05 08:48:00 crc kubenswrapper[4780]: E1205 08:48:00.223359 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0\": container with ID starting with de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0 not found: ID does not exist" containerID="de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.223391 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0"} err="failed to get container status \"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0\": rpc error: code = NotFound desc = could not find container \"de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0\": container with ID starting with de8f88303e9a190f3f36b24bb9663bad2bf661949d4d342b8bdf41f6f821d7b0 not found: ID does not exist" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.223412 4780 scope.go:117] "RemoveContainer" containerID="a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155" Dec 05 08:48:00 crc kubenswrapper[4780]: E1205 08:48:00.223603 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155\": container with ID starting with a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155 not found: ID does not exist" containerID="a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.223624 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155"} err="failed to get container status \"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155\": rpc error: code = NotFound desc = could not find container \"a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155\": container with ID starting with a51a8da9c3b9af33ca1859a0ef6825bb71037dc0b0e5df69f9ca8248859cc155 not found: ID does not exist" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.223641 4780 scope.go:117] "RemoveContainer" 
containerID="546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5" Dec 05 08:48:00 crc kubenswrapper[4780]: E1205 08:48:00.224160 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5\": container with ID starting with 546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5 not found: ID does not exist" containerID="546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5" Dec 05 08:48:00 crc kubenswrapper[4780]: I1205 08:48:00.224187 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5"} err="failed to get container status \"546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5\": rpc error: code = NotFound desc = could not find container \"546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5\": container with ID starting with 546e1a3fd88e76e27fbf6ea7cbe2019563c7bc2e733099fe343b91f3877b70d5 not found: ID does not exist" Dec 05 08:48:01 crc kubenswrapper[4780]: I1205 08:48:01.134872 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:01 crc kubenswrapper[4780]: I1205 08:48:01.135221 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:01 crc kubenswrapper[4780]: I1205 08:48:01.179052 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:02 crc kubenswrapper[4780]: I1205 08:48:02.150175 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" path="/var/lib/kubelet/pods/3bb92809-0e67-4b2f-ae25-7ac836bc9551/volumes" Dec 05 08:48:02 crc kubenswrapper[4780]: I1205 08:48:02.224652 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:02 crc kubenswrapper[4780]: I1205 08:48:02.988686 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:48:04 crc kubenswrapper[4780]: I1205 08:48:04.164916 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9g9nr" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="registry-server" containerID="cri-o://4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d" gracePeriod=2 Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.149712 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.175400 4780 generic.go:334] "Generic (PLEG): container finished" podID="64227ea0-6ec7-4177-a844-f8596789e125" containerID="4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d" exitCode=0 Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.175441 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerDied","Data":"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d"} Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.175468 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9nr" event={"ID":"64227ea0-6ec7-4177-a844-f8596789e125","Type":"ContainerDied","Data":"0a3e242564fad41e74d5a06bf121f08731c6971f4dc1804132e8f24ac15b5244"} Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.175484 4780 scope.go:117] "RemoveContainer" containerID="4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.175620 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9nr" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.223416 4780 scope.go:117] "RemoveContainer" containerID="ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.248490 4780 scope.go:117] "RemoveContainer" containerID="6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.259011 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content\") pod \"64227ea0-6ec7-4177-a844-f8596789e125\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.259140 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities\") pod \"64227ea0-6ec7-4177-a844-f8596789e125\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.259207 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkzw\" (UniqueName: \"kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw\") pod \"64227ea0-6ec7-4177-a844-f8596789e125\" (UID: \"64227ea0-6ec7-4177-a844-f8596789e125\") " Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.267194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw" (OuterVolumeSpecName: "kube-api-access-6lkzw") pod "64227ea0-6ec7-4177-a844-f8596789e125" (UID: "64227ea0-6ec7-4177-a844-f8596789e125"). InnerVolumeSpecName "kube-api-access-6lkzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.268007 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities" (OuterVolumeSpecName: "utilities") pod "64227ea0-6ec7-4177-a844-f8596789e125" (UID: "64227ea0-6ec7-4177-a844-f8596789e125"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.293250 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64227ea0-6ec7-4177-a844-f8596789e125" (UID: "64227ea0-6ec7-4177-a844-f8596789e125"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.341000 4780 scope.go:117] "RemoveContainer" containerID="4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d" Dec 05 08:48:05 crc kubenswrapper[4780]: E1205 08:48:05.341729 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d\": container with ID starting with 4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d not found: ID does not exist" containerID="4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.341778 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d"} err="failed to get container status \"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d\": rpc error: code = NotFound desc = could not find container \"4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d\": container with ID starting with 4c8eff9424015ff6e6cc01b4a85cc4863a3ef0da8e8c7d80af18eb79c1f17d7d not found: ID does not exist" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.341804 4780 scope.go:117] "RemoveContainer" containerID="ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60" Dec 05 08:48:05 crc kubenswrapper[4780]: E1205 08:48:05.342246 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60\": container with ID starting with ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60 not found: ID does not exist" containerID="ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.342291 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60"} err="failed to get container status \"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60\": rpc error: code = NotFound desc = could not find container \"ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60\": container with ID starting with ac2a623ef286a96b4895973b259217598f687faf2e078e2fb3332d436ba2be60 not found: ID does not exist" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.342320 4780 scope.go:117] "RemoveContainer" 
containerID="6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba" Dec 05 08:48:05 crc kubenswrapper[4780]: E1205 08:48:05.342661 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba\": container with ID starting with 6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba not found: ID does not exist" containerID="6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.342789 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba"} err="failed to get container status \"6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba\": rpc error: code = NotFound desc = could not find container \"6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba\": container with ID starting with 6ad0121d35d0739e6d3856a11180c6cb86a414c675f8b5d8d272e9f9899314ba not found: ID does not exist" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.361155 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.361189 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkzw\" (UniqueName: \"kubernetes.io/projected/64227ea0-6ec7-4177-a844-f8596789e125-kube-api-access-6lkzw\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.361202 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64227ea0-6ec7-4177-a844-f8596789e125-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.506972 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:48:05 crc kubenswrapper[4780]: I1205 08:48:05.515509 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9nr"] Dec 05 08:48:06 crc kubenswrapper[4780]: I1205 08:48:06.151789 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64227ea0-6ec7-4177-a844-f8596789e125" path="/var/lib/kubelet/pods/64227ea0-6ec7-4177-a844-f8596789e125/volumes" Dec 05 08:48:33 crc kubenswrapper[4780]: I1205 08:48:33.417141 4780 generic.go:334] "Generic (PLEG): container finished" podID="2ae81285-c333-416d-b78f-dceba6c3ffda" containerID="a00b450996e77bb323723889895d2995dd185809aeefbc14127e053dad2a124d" exitCode=0 Dec 05 08:48:33 crc kubenswrapper[4780]: I1205 08:48:33.417189 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" event={"ID":"2ae81285-c333-416d-b78f-dceba6c3ffda","Type":"ContainerDied","Data":"a00b450996e77bb323723889895d2995dd185809aeefbc14127e053dad2a124d"} Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.864625 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983528 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983623 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983680 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983712 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983762 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983813 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983862 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpfj6\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983926 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983970 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 
08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.983996 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.984030 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.984065 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.984108 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.984158 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.984194 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle\") pod \"2ae81285-c333-416d-b78f-dceba6c3ffda\" (UID: \"2ae81285-c333-416d-b78f-dceba6c3ffda\") " Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.991884 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.992186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.993815 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.994063 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.994091 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.994599 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.995043 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6" (OuterVolumeSpecName: "kube-api-access-qpfj6") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "kube-api-access-qpfj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.998079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.998170 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.998194 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:34 crc kubenswrapper[4780]: I1205 08:48:34.998772 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.003756 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.004126 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.023784 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory" (OuterVolumeSpecName: "inventory") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.026491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2ae81285-c333-416d-b78f-dceba6c3ffda" (UID: "2ae81285-c333-416d-b78f-dceba6c3ffda"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086720 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086768 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086786 4780 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086799 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086813 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086823 4780 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086832 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086841 4780 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086850 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086859 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086869 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086877 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086934 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpfj6\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-kube-api-access-qpfj6\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086944 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae81285-c333-416d-b78f-dceba6c3ffda-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.086953 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2ae81285-c333-416d-b78f-dceba6c3ffda-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.435935 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" event={"ID":"2ae81285-c333-416d-b78f-dceba6c3ffda","Type":"ContainerDied","Data":"dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285"} Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.436204 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd89bb3b7a50508e1359535cab205485ccac15b35a986d1d8145b55597aaa285" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.435994 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-jhkh5" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.556890 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-8j825"] Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557331 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae81285-c333-416d-b78f-dceba6c3ffda" containerName="install-certs-openstack-openstack-cell1" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557345 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae81285-c333-416d-b78f-dceba6c3ffda" containerName="install-certs-openstack-openstack-cell1" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557358 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="extract-utilities" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557365 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="extract-utilities" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557374 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="extract-content" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557380 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="extract-content" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557391 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="extract-content" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557398 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" 
containerName="extract-content" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557411 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557418 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557434 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="extract-utilities" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557440 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="extract-utilities" Dec 05 08:48:35 crc kubenswrapper[4780]: E1205 08:48:35.557459 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557465 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557643 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae81285-c333-416d-b78f-dceba6c3ffda" containerName="install-certs-openstack-openstack-cell1" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557654 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="64227ea0-6ec7-4177-a844-f8596789e125" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.557682 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb92809-0e67-4b2f-ae25-7ac836bc9551" containerName="registry-server" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.558564 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.561496 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.561503 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.561555 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.561601 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.561998 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.584271 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-8j825"] Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.603686 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8jbp\" (UniqueName: \"kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.604011 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.604153 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.604276 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.604511 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.706613 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: 
\"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.706745 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.706802 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8jbp\" (UniqueName: \"kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.706847 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.706924 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.708278 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.714238 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.716280 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.716585 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.723739 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8jbp\" (UniqueName: 
\"kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp\") pod \"ovn-openstack-openstack-cell1-8j825\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:35 crc kubenswrapper[4780]: I1205 08:48:35.882635 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:48:36 crc kubenswrapper[4780]: I1205 08:48:36.391698 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-8j825"] Dec 05 08:48:36 crc kubenswrapper[4780]: I1205 08:48:36.446461 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-8j825" event={"ID":"4406be2e-554b-4325-90c8-1c2764436e70","Type":"ContainerStarted","Data":"a5e3fd488d2b1653a482237c00429ea88b27d88d708212f3162bcc383ca8c868"} Dec 05 08:48:37 crc kubenswrapper[4780]: I1205 08:48:37.524646 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-8j825" event={"ID":"4406be2e-554b-4325-90c8-1c2764436e70","Type":"ContainerStarted","Data":"931cdd204598a55fd3839023b89ce459c5cc877276311a6e8024cb705a3c19f8"} Dec 05 08:48:37 crc kubenswrapper[4780]: I1205 08:48:37.584138 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-8j825" podStartSLOduration=2.172604092 podStartE2EDuration="2.584113731s" podCreationTimestamp="2025-12-05 08:48:35 +0000 UTC" firstStartedPulling="2025-12-05 08:48:36.393196594 +0000 UTC m=+7350.462712926" lastFinishedPulling="2025-12-05 08:48:36.804706233 +0000 UTC m=+7350.874222565" observedRunningTime="2025-12-05 08:48:37.554348626 +0000 UTC m=+7351.623864968" watchObservedRunningTime="2025-12-05 08:48:37.584113731 +0000 UTC m=+7351.653630063" Dec 05 08:48:59 crc kubenswrapper[4780]: I1205 08:48:59.908294 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:48:59 crc kubenswrapper[4780]: I1205 08:48:59.908874 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:49:29 crc kubenswrapper[4780]: I1205 08:49:29.908488 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:49:29 crc kubenswrapper[4780]: I1205 08:49:29.909176 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:49:39 crc kubenswrapper[4780]: I1205 08:49:39.086242 4780 generic.go:334] "Generic (PLEG): container finished" podID="4406be2e-554b-4325-90c8-1c2764436e70" 
containerID="931cdd204598a55fd3839023b89ce459c5cc877276311a6e8024cb705a3c19f8" exitCode=0 Dec 05 08:49:39 crc kubenswrapper[4780]: I1205 08:49:39.086958 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-8j825" event={"ID":"4406be2e-554b-4325-90c8-1c2764436e70","Type":"ContainerDied","Data":"931cdd204598a55fd3839023b89ce459c5cc877276311a6e8024cb705a3c19f8"} Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.559482 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.634544 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key\") pod \"4406be2e-554b-4325-90c8-1c2764436e70\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.634668 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory\") pod \"4406be2e-554b-4325-90c8-1c2764436e70\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.634764 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle\") pod \"4406be2e-554b-4325-90c8-1c2764436e70\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.634868 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0\") pod \"4406be2e-554b-4325-90c8-1c2764436e70\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.634914 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8jbp\" (UniqueName: \"kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp\") pod \"4406be2e-554b-4325-90c8-1c2764436e70\" (UID: \"4406be2e-554b-4325-90c8-1c2764436e70\") " Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.641032 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4406be2e-554b-4325-90c8-1c2764436e70" (UID: "4406be2e-554b-4325-90c8-1c2764436e70"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.644186 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp" (OuterVolumeSpecName: "kube-api-access-r8jbp") pod "4406be2e-554b-4325-90c8-1c2764436e70" (UID: "4406be2e-554b-4325-90c8-1c2764436e70"). InnerVolumeSpecName "kube-api-access-r8jbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.662515 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "4406be2e-554b-4325-90c8-1c2764436e70" (UID: "4406be2e-554b-4325-90c8-1c2764436e70"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.664096 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory" (OuterVolumeSpecName: "inventory") pod "4406be2e-554b-4325-90c8-1c2764436e70" (UID: "4406be2e-554b-4325-90c8-1c2764436e70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.668774 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4406be2e-554b-4325-90c8-1c2764436e70" (UID: "4406be2e-554b-4325-90c8-1c2764436e70"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.737186 4780 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/4406be2e-554b-4325-90c8-1c2764436e70-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.737223 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8jbp\" (UniqueName: \"kubernetes.io/projected/4406be2e-554b-4325-90c8-1c2764436e70-kube-api-access-r8jbp\") on node \"crc\" DevicePath \"\"" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.737233 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.737246 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:49:40 crc kubenswrapper[4780]: I1205 08:49:40.737255 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4406be2e-554b-4325-90c8-1c2764436e70-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.104716 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-8j825" event={"ID":"4406be2e-554b-4325-90c8-1c2764436e70","Type":"ContainerDied","Data":"a5e3fd488d2b1653a482237c00429ea88b27d88d708212f3162bcc383ca8c868"} Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.104770 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e3fd488d2b1653a482237c00429ea88b27d88d708212f3162bcc383ca8c868" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.104771 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-8j825" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.220828 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xrklf"] Dec 05 08:49:41 crc kubenswrapper[4780]: E1205 08:49:41.226201 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4406be2e-554b-4325-90c8-1c2764436e70" containerName="ovn-openstack-openstack-cell1" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.226352 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4406be2e-554b-4325-90c8-1c2764436e70" containerName="ovn-openstack-openstack-cell1" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.226625 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4406be2e-554b-4325-90c8-1c2764436e70" containerName="ovn-openstack-openstack-cell1" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.227499 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.232383 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.232471 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.232602 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.232818 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.232982 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.233685 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.239100 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xrklf"] Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.349105 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.349568 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.349797 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.349858 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grwq9\" (UniqueName: \"kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.349985 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.350103 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452475 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grwq9\" (UniqueName: \"kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452567 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452624 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452675 4780 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.452717 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.458906 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.459458 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.459586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.460155 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.463613 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.474848 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grwq9\" (UniqueName: \"kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9\") pod \"neutron-metadata-openstack-openstack-cell1-xrklf\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:41 crc kubenswrapper[4780]: I1205 08:49:41.551450 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:49:42 crc kubenswrapper[4780]: I1205 08:49:42.097037 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-xrklf"] Dec 05 08:49:42 crc kubenswrapper[4780]: I1205 08:49:42.117863 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" event={"ID":"99f4a150-ad95-40fe-b058-3cf6d49aed23","Type":"ContainerStarted","Data":"af96ea95aaf9e375e3e3ea29dad6221b330eb5da273ac9b6fba91721ce0222c1"} Dec 05 08:49:44 crc kubenswrapper[4780]: I1205 08:49:44.153536 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" event={"ID":"99f4a150-ad95-40fe-b058-3cf6d49aed23","Type":"ContainerStarted","Data":"8e536d7c0027981aada9a7008857b1438ca1d0bb4a78570775722d514d2bad2c"} Dec 05 08:49:44 crc kubenswrapper[4780]: I1205 08:49:44.177198 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" podStartSLOduration=2.508840943 podStartE2EDuration="3.177177038s" podCreationTimestamp="2025-12-05 08:49:41 +0000 UTC" firstStartedPulling="2025-12-05 08:49:42.10824 +0000 UTC m=+7416.177756332" lastFinishedPulling="2025-12-05 08:49:42.776576095 +0000 UTC m=+7416.846092427" observedRunningTime="2025-12-05 08:49:44.173169473 +0000 UTC m=+7418.242685835" watchObservedRunningTime="2025-12-05 08:49:44.177177038 +0000 UTC m=+7418.246693380" Dec 05 08:49:59 crc kubenswrapper[4780]: I1205 08:49:59.907667 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:49:59 crc kubenswrapper[4780]: I1205 08:49:59.908270 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:49:59 crc kubenswrapper[4780]: I1205 08:49:59.908314 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:49:59 crc kubenswrapper[4780]: I1205 08:49:59.909143 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:49:59 crc kubenswrapper[4780]: I1205 08:49:59.909196 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" gracePeriod=600 Dec 05 08:50:00 crc kubenswrapper[4780]: E1205 08:50:00.066050 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:50:00 crc kubenswrapper[4780]: I1205 08:50:00.282418 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" exitCode=0 Dec 05 08:50:00 crc kubenswrapper[4780]: I1205 08:50:00.282469 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256"} Dec 05 08:50:00 crc kubenswrapper[4780]: I1205 08:50:00.282512 4780 scope.go:117] "RemoveContainer" containerID="1cd623fd8841f50a04258d5eda65d0ec55ba5d98c7293c90f5681bc480249024" Dec 05 08:50:00 crc kubenswrapper[4780]: I1205 08:50:00.283248 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:50:00 crc kubenswrapper[4780]: E1205 08:50:00.283543 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:50:11 crc kubenswrapper[4780]: I1205 08:50:11.139408 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:50:11 crc kubenswrapper[4780]: E1205 08:50:11.141327 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:50:23 crc kubenswrapper[4780]: I1205 08:50:23.138975 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:50:23 crc kubenswrapper[4780]: E1205 08:50:23.139721 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:50:34 crc kubenswrapper[4780]: I1205 08:50:34.588705 4780 generic.go:334] "Generic (PLEG): container finished" podID="99f4a150-ad95-40fe-b058-3cf6d49aed23" containerID="8e536d7c0027981aada9a7008857b1438ca1d0bb4a78570775722d514d2bad2c" exitCode=0 Dec 05 08:50:34 crc kubenswrapper[4780]: I1205 08:50:34.588810 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" 
event={"ID":"99f4a150-ad95-40fe-b058-3cf6d49aed23","Type":"ContainerDied","Data":"8e536d7c0027981aada9a7008857b1438ca1d0bb4a78570775722d514d2bad2c"} Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.019979 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.109842 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwq9\" (UniqueName: \"kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.110037 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.110067 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.110118 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.110196 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.110239 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0\") pod \"99f4a150-ad95-40fe-b058-3cf6d49aed23\" (UID: \"99f4a150-ad95-40fe-b058-3cf6d49aed23\") " Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.115150 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.115378 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9" (OuterVolumeSpecName: "kube-api-access-grwq9") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "kube-api-access-grwq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.147610 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.150310 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.151382 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.152161 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory" (OuterVolumeSpecName: "inventory") pod "99f4a150-ad95-40fe-b058-3cf6d49aed23" (UID: "99f4a150-ad95-40fe-b058-3cf6d49aed23"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213136 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grwq9\" (UniqueName: \"kubernetes.io/projected/99f4a150-ad95-40fe-b058-3cf6d49aed23-kube-api-access-grwq9\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213179 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213192 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213205 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213217 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.213229 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/99f4a150-ad95-40fe-b058-3cf6d49aed23-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.608192 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" event={"ID":"99f4a150-ad95-40fe-b058-3cf6d49aed23","Type":"ContainerDied","Data":"af96ea95aaf9e375e3e3ea29dad6221b330eb5da273ac9b6fba91721ce0222c1"} Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.608285 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af96ea95aaf9e375e3e3ea29dad6221b330eb5da273ac9b6fba91721ce0222c1" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.608231 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-xrklf" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.718903 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bwvvh"] Dec 05 08:50:36 crc kubenswrapper[4780]: E1205 08:50:36.719613 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f4a150-ad95-40fe-b058-3cf6d49aed23" containerName="neutron-metadata-openstack-openstack-cell1" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.719633 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f4a150-ad95-40fe-b058-3cf6d49aed23" containerName="neutron-metadata-openstack-openstack-cell1" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.719850 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f4a150-ad95-40fe-b058-3cf6d49aed23" containerName="neutron-metadata-openstack-openstack-cell1" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.720581 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.725464 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.725828 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.725895 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.726470 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.726598 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.740337 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bwvvh"] Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.826430 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.826537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttg4\" (UniqueName: \"kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.826781 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.826815 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.826849 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.928805 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0\") pod 
\"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.928853 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.928904 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.929094 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.929220 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttg4\" (UniqueName: \"kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.933401 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.938562 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.939193 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.939456 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:36 crc kubenswrapper[4780]: I1205 08:50:36.949417 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bttg4\" (UniqueName: \"kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4\") pod \"libvirt-openstack-openstack-cell1-bwvvh\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:37 crc kubenswrapper[4780]: I1205 08:50:37.040604 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:50:37 crc kubenswrapper[4780]: I1205 08:50:37.601549 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bwvvh"] Dec 05 08:50:37 crc kubenswrapper[4780]: I1205 08:50:37.609696 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:50:37 crc kubenswrapper[4780]: I1205 08:50:37.618491 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" event={"ID":"77265a47-156e-4225-9ca8-0cb7000048b3","Type":"ContainerStarted","Data":"22ba8b01fd68679b1c2f351567c9260277be914f9d20a36c248691f398b4935b"} Dec 05 08:50:38 crc kubenswrapper[4780]: I1205 08:50:38.142636 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:50:38 crc kubenswrapper[4780]: E1205 08:50:38.143773 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:50:38 crc kubenswrapper[4780]: I1205 08:50:38.630865 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" event={"ID":"77265a47-156e-4225-9ca8-0cb7000048b3","Type":"ContainerStarted","Data":"bd98a1bdfe8f1692a8386b52d77002ba6e4035ec48ded8e4228f62a19a8d0ee0"} Dec 05 08:50:38 crc kubenswrapper[4780]: I1205 08:50:38.649145 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" podStartSLOduration=2.202900437 podStartE2EDuration="2.649121714s" podCreationTimestamp="2025-12-05 08:50:36 +0000 UTC" firstStartedPulling="2025-12-05 08:50:37.609436676 +0000 UTC m=+7471.678953008" lastFinishedPulling="2025-12-05 08:50:38.055657953 +0000 UTC m=+7472.125174285" observedRunningTime="2025-12-05 08:50:38.64635351 +0000 UTC m=+7472.715869852" watchObservedRunningTime="2025-12-05 08:50:38.649121714 +0000 UTC m=+7472.718638066" Dec 05 08:50:49 crc kubenswrapper[4780]: I1205 08:50:49.139529 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:50:49 crc kubenswrapper[4780]: E1205 08:50:49.140415 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:04 crc kubenswrapper[4780]: I1205 08:51:04.139785 4780 scope.go:117] "RemoveContainer" 
containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:51:04 crc kubenswrapper[4780]: E1205 08:51:04.147998 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:16 crc kubenswrapper[4780]: I1205 08:51:16.145702 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:51:16 crc kubenswrapper[4780]: E1205 08:51:16.146563 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:28 crc kubenswrapper[4780]: I1205 08:51:28.138973 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:51:28 crc kubenswrapper[4780]: E1205 08:51:28.139745 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.743312 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.746165 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.754728 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.832584 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.832734 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzm7x\" (UniqueName: \"kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.832768 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.934334 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.934479 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzm7x\" (UniqueName: \"kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.934517 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.935077 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.935206 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:32 crc kubenswrapper[4780]: I1205 08:51:32.959944 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hzm7x\" (UniqueName: \"kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x\") pod \"certified-operators-9xvdw\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:33 crc kubenswrapper[4780]: I1205 08:51:33.070026 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:33 crc kubenswrapper[4780]: I1205 08:51:33.733445 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:34 crc kubenswrapper[4780]: I1205 08:51:34.128992 4780 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerID="75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da" exitCode=0 Dec 05 08:51:34 crc kubenswrapper[4780]: I1205 08:51:34.129040 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerDied","Data":"75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da"} Dec 05 08:51:34 crc kubenswrapper[4780]: I1205 08:51:34.129242 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerStarted","Data":"b24b1cccce59f91ce10fdb0ec47b81b2f51d4bb921aa7b3be474a18873345d46"} Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.141493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerStarted","Data":"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937"} Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.343570 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.346221 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.364273 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.427338 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6r9b\" (UniqueName: \"kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.427481 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.427582 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.529809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.529936 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6r9b\" (UniqueName: \"kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.530002 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.530436 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.530586 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.554628 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g6r9b\" (UniqueName: \"kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b\") pod \"redhat-operators-wqg8q\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:35 crc kubenswrapper[4780]: I1205 08:51:35.663392 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:36 crc kubenswrapper[4780]: I1205 08:51:36.152802 4780 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerID="426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937" exitCode=0 Dec 05 08:51:36 crc kubenswrapper[4780]: I1205 08:51:36.152858 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerDied","Data":"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937"} Dec 05 08:51:36 crc kubenswrapper[4780]: W1205 08:51:36.186574 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736094ec_007a_40f7_9e2b_53b1ae4ccc31.slice/crio-dc60d2cdcdf920c09bcc1ac9f44e83aa22349f942650f7420af1019436a78a05 WatchSource:0}: Error finding container dc60d2cdcdf920c09bcc1ac9f44e83aa22349f942650f7420af1019436a78a05: Status 404 returned error can't find the container with id dc60d2cdcdf920c09bcc1ac9f44e83aa22349f942650f7420af1019436a78a05 Dec 05 08:51:36 crc kubenswrapper[4780]: I1205 08:51:36.192540 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:37 crc kubenswrapper[4780]: I1205 08:51:37.171276 4780 generic.go:334] "Generic (PLEG): container finished" podID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerID="b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece" exitCode=0 Dec 05 08:51:37 crc kubenswrapper[4780]: I1205 08:51:37.171401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerDied","Data":"b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece"} Dec 05 08:51:37 crc kubenswrapper[4780]: I1205 08:51:37.171786 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerStarted","Data":"dc60d2cdcdf920c09bcc1ac9f44e83aa22349f942650f7420af1019436a78a05"} Dec 05 08:51:38 crc kubenswrapper[4780]: I1205 08:51:38.186131 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerStarted","Data":"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc"} Dec 05 08:51:38 crc kubenswrapper[4780]: I1205 08:51:38.216314 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xvdw" podStartSLOduration=3.270176025 podStartE2EDuration="6.216291846s" podCreationTimestamp="2025-12-05 08:51:32 +0000 UTC" firstStartedPulling="2025-12-05 08:51:34.13140024 +0000 UTC m=+7528.200916612" lastFinishedPulling="2025-12-05 08:51:37.077516101 +0000 UTC m=+7531.147032433" observedRunningTime="2025-12-05 08:51:38.213192623 +0000 UTC m=+7532.282708965" watchObservedRunningTime="2025-12-05 
08:51:38.216291846 +0000 UTC m=+7532.285808178" Dec 05 08:51:39 crc kubenswrapper[4780]: I1205 08:51:39.139593 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:51:39 crc kubenswrapper[4780]: E1205 08:51:39.140299 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:39 crc kubenswrapper[4780]: I1205 08:51:39.198098 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerStarted","Data":"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af"} Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.070176 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.071042 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.115305 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.235522 4780 generic.go:334] "Generic (PLEG): container finished" podID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerID="ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af" exitCode=0 Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.235733 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerDied","Data":"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af"} Dec 05 08:51:43 crc kubenswrapper[4780]: I1205 08:51:43.286392 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:44 crc kubenswrapper[4780]: I1205 08:51:44.247286 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerStarted","Data":"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb"} Dec 05 08:51:44 crc kubenswrapper[4780]: I1205 08:51:44.265858 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wqg8q" podStartSLOduration=2.789003039 podStartE2EDuration="9.265840623s" podCreationTimestamp="2025-12-05 08:51:35 +0000 UTC" firstStartedPulling="2025-12-05 08:51:37.174325037 +0000 UTC m=+7531.243841369" lastFinishedPulling="2025-12-05 08:51:43.651162621 +0000 UTC m=+7537.720678953" observedRunningTime="2025-12-05 08:51:44.262254959 +0000 UTC m=+7538.331771311" watchObservedRunningTime="2025-12-05 08:51:44.265840623 +0000 UTC m=+7538.335356955" Dec 05 08:51:44 crc kubenswrapper[4780]: I1205 08:51:44.348063 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 
08:51:45.258168 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xvdw" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="registry-server" containerID="cri-o://e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc" gracePeriod=2 Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.664099 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.664440 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.822803 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.935466 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities\") pod \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.935703 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzm7x\" (UniqueName: \"kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x\") pod \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.935786 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content\") pod \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\" (UID: \"6ac0a52c-6e10-4808-b927-6a7ce94fc099\") " Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.936007 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities" (OuterVolumeSpecName: "utilities") pod "6ac0a52c-6e10-4808-b927-6a7ce94fc099" (UID: "6ac0a52c-6e10-4808-b927-6a7ce94fc099"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.936268 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.941513 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x" (OuterVolumeSpecName: "kube-api-access-hzm7x") pod "6ac0a52c-6e10-4808-b927-6a7ce94fc099" (UID: "6ac0a52c-6e10-4808-b927-6a7ce94fc099"). InnerVolumeSpecName "kube-api-access-hzm7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:51:45 crc kubenswrapper[4780]: I1205 08:51:45.978855 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ac0a52c-6e10-4808-b927-6a7ce94fc099" (UID: "6ac0a52c-6e10-4808-b927-6a7ce94fc099"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.038583 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzm7x\" (UniqueName: \"kubernetes.io/projected/6ac0a52c-6e10-4808-b927-6a7ce94fc099-kube-api-access-hzm7x\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.038632 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a52c-6e10-4808-b927-6a7ce94fc099-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.270767 4780 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerID="e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc" exitCode=0 Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.270822 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerDied","Data":"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc"} Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.270859 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xvdw" event={"ID":"6ac0a52c-6e10-4808-b927-6a7ce94fc099","Type":"ContainerDied","Data":"b24b1cccce59f91ce10fdb0ec47b81b2f51d4bb921aa7b3be474a18873345d46"} Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.270895 4780 scope.go:117] "RemoveContainer" containerID="e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.270932 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xvdw" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.296231 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.302348 4780 scope.go:117] "RemoveContainer" containerID="426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.307599 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xvdw"] Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.354474 4780 scope.go:117] "RemoveContainer" containerID="75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.376931 4780 scope.go:117] "RemoveContainer" containerID="e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc" Dec 05 08:51:46 crc kubenswrapper[4780]: E1205 08:51:46.377614 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc\": container with ID starting with e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc not found: ID does not exist" containerID="e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.377667 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc"} err="failed to get container status \"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc\": rpc error: code = NotFound desc = could not find container \"e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc\": container with ID starting with e52b191685cd8b987afd326cf2764c421ff438429f3994e06aa475736c2d94dc not found: ID does not exist" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.377704 4780 scope.go:117] "RemoveContainer" containerID="426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937" Dec 05 08:51:46 crc kubenswrapper[4780]: E1205 08:51:46.378262 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937\": container with ID starting with 426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937 not found: ID does not exist" containerID="426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.378298 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937"} err="failed to get container status \"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937\": rpc error: code = NotFound desc = could not find container \"426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937\": container with ID starting with 426342cbfad28f18d55ea6d69fa231f940cb8a39cc19b60f738604dc66a5f937 not found: ID does not exist" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.378317 4780 scope.go:117] "RemoveContainer" containerID="75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da" Dec 05 08:51:46 crc kubenswrapper[4780]: E1205 08:51:46.378831 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da\": container with ID starting with 75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da not found: ID does not exist" containerID="75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.378928 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da"} err="failed to get container status \"75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da\": rpc error: code = NotFound desc = could not find container \"75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da\": container with ID starting with 75b98caf6a2bb7f4438ce6b5587d0eb714698ef4b9a0682b8169b0d4ef83d3da not found: ID does not exist" Dec 05 08:51:46 crc kubenswrapper[4780]: I1205 08:51:46.707442 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wqg8q" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="registry-server" probeResult="failure" output=< Dec 05 08:51:46 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Dec 05 08:51:46 crc kubenswrapper[4780]: > Dec 05 08:51:48 crc kubenswrapper[4780]: I1205 08:51:48.158798 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" path="/var/lib/kubelet/pods/6ac0a52c-6e10-4808-b927-6a7ce94fc099/volumes" Dec 05 08:51:52 crc kubenswrapper[4780]: I1205 08:51:52.139043 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:51:52 crc kubenswrapper[4780]: E1205 08:51:52.139788 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:51:55 crc kubenswrapper[4780]: I1205 08:51:55.713795 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:55 crc kubenswrapper[4780]: I1205 08:51:55.767805 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:55 crc kubenswrapper[4780]: I1205 08:51:55.949226 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.370361 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wqg8q" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="registry-server" containerID="cri-o://7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb" gracePeriod=2 Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.849019 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.976142 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content\") pod \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.976314 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6r9b\" (UniqueName: \"kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b\") pod \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.976515 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities\") pod \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\" (UID: \"736094ec-007a-40f7-9e2b-53b1ae4ccc31\") " Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.977119 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities" (OuterVolumeSpecName: "utilities") pod "736094ec-007a-40f7-9e2b-53b1ae4ccc31" (UID: "736094ec-007a-40f7-9e2b-53b1ae4ccc31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:51:57 crc kubenswrapper[4780]: I1205 08:51:57.986693 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b" (OuterVolumeSpecName: "kube-api-access-g6r9b") pod "736094ec-007a-40f7-9e2b-53b1ae4ccc31" (UID: "736094ec-007a-40f7-9e2b-53b1ae4ccc31"). InnerVolumeSpecName "kube-api-access-g6r9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.079146 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.079189 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6r9b\" (UniqueName: \"kubernetes.io/projected/736094ec-007a-40f7-9e2b-53b1ae4ccc31-kube-api-access-g6r9b\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.099192 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "736094ec-007a-40f7-9e2b-53b1ae4ccc31" (UID: "736094ec-007a-40f7-9e2b-53b1ae4ccc31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.181545 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/736094ec-007a-40f7-9e2b-53b1ae4ccc31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.382554 4780 generic.go:334] "Generic (PLEG): container finished" podID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerID="7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb" exitCode=0 Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.382602 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerDied","Data":"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb"} Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.382619 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqg8q" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.382647 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqg8q" event={"ID":"736094ec-007a-40f7-9e2b-53b1ae4ccc31","Type":"ContainerDied","Data":"dc60d2cdcdf920c09bcc1ac9f44e83aa22349f942650f7420af1019436a78a05"} Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.382682 4780 scope.go:117] "RemoveContainer" containerID="7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.407607 4780 scope.go:117] "RemoveContainer" containerID="ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.409834 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.424304 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wqg8q"] Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.435232 4780 scope.go:117] "RemoveContainer" containerID="b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.475971 4780 scope.go:117] "RemoveContainer" containerID="7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb" Dec 05 08:51:58 crc kubenswrapper[4780]: E1205 08:51:58.476516 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb\": container with ID starting with 7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb not found: ID does not exist" containerID="7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.476576 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb"} err="failed to get container status \"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb\": rpc error: code = NotFound desc = could not find container \"7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb\": container with ID starting with 7201b49fb958c1dda510a12166decfa27deb5f3e81e85bf08bcd6538754124cb not found: ID does not exist" Dec 05 08:51:58 crc 
kubenswrapper[4780]: I1205 08:51:58.476614 4780 scope.go:117] "RemoveContainer" containerID="ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af" Dec 05 08:51:58 crc kubenswrapper[4780]: E1205 08:51:58.477046 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af\": container with ID starting with ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af not found: ID does not exist" containerID="ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.477090 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af"} err="failed to get container status \"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af\": rpc error: code = NotFound desc = could not find container \"ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af\": container with ID starting with ba4e7839f22beafd3b654a7784c516eeeaf8d03b828c6f4da3a19467267d71af not found: ID does not exist" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.477118 4780 scope.go:117] "RemoveContainer" containerID="b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece" Dec 05 08:51:58 crc kubenswrapper[4780]: E1205 08:51:58.477539 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece\": container with ID starting with b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece not found: ID does not exist" containerID="b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece" Dec 05 08:51:58 crc kubenswrapper[4780]: I1205 08:51:58.477719 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece"} err="failed to get container status \"b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece\": rpc error: code = NotFound desc = could not find container \"b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece\": container with ID starting with b31c264b82ee5b5f8f75844cb94d09d0d584d282ad68ba4e39f9019b67e27ece not found: ID does not exist" Dec 05 08:52:00 crc kubenswrapper[4780]: I1205 08:52:00.150085 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" path="/var/lib/kubelet/pods/736094ec-007a-40f7-9e2b-53b1ae4ccc31/volumes" Dec 05 08:52:03 crc kubenswrapper[4780]: I1205 08:52:03.139324 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:52:03 crc kubenswrapper[4780]: E1205 08:52:03.139957 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:52:15 crc kubenswrapper[4780]: I1205 08:52:15.139399 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" 
Dec 05 08:52:15 crc kubenswrapper[4780]: E1205 08:52:15.140319 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:52:29 crc kubenswrapper[4780]: I1205 08:52:29.139228 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:52:29 crc kubenswrapper[4780]: E1205 08:52:29.140127 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:52:40 crc kubenswrapper[4780]: I1205 08:52:40.139673 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:52:40 crc kubenswrapper[4780]: E1205 08:52:40.140641 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:52:52 crc kubenswrapper[4780]: I1205 08:52:52.138717 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:52:52 crc kubenswrapper[4780]: E1205 08:52:52.139563 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:53:03 crc kubenswrapper[4780]: I1205 08:53:03.139112 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:53:03 crc kubenswrapper[4780]: E1205 08:53:03.139978 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:53:17 crc kubenswrapper[4780]: I1205 08:53:17.139569 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:53:17 crc kubenswrapper[4780]: E1205 08:53:17.140301 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:53:32 crc kubenswrapper[4780]: I1205 08:53:32.139414 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:53:32 crc kubenswrapper[4780]: E1205 08:53:32.140038 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:53:43 crc kubenswrapper[4780]: I1205 08:53:43.139175 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:53:43 crc kubenswrapper[4780]: E1205 08:53:43.140041 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:53:56 crc kubenswrapper[4780]: I1205 08:53:56.147754 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:53:56 crc kubenswrapper[4780]: E1205 08:53:56.148801 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:54:07 crc kubenswrapper[4780]: I1205 08:54:07.139726 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:54:07 crc kubenswrapper[4780]: E1205 08:54:07.140814 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:54:22 crc kubenswrapper[4780]: I1205 08:54:22.138750 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:54:22 crc kubenswrapper[4780]: E1205 08:54:22.140682 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:54:37 crc kubenswrapper[4780]: I1205 08:54:37.139033 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:54:37 crc kubenswrapper[4780]: E1205 08:54:37.140151 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:54:49 crc kubenswrapper[4780]: I1205 08:54:49.139417 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:54:49 crc kubenswrapper[4780]: E1205 08:54:49.140543 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 08:55:02 crc kubenswrapper[4780]: I1205 08:55:02.139379 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:55:03 crc kubenswrapper[4780]: I1205 08:55:03.140995 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa"} Dec 05 08:55:17 crc kubenswrapper[4780]: I1205 08:55:17.283957 4780 generic.go:334] "Generic (PLEG): container finished" podID="77265a47-156e-4225-9ca8-0cb7000048b3" containerID="bd98a1bdfe8f1692a8386b52d77002ba6e4035ec48ded8e4228f62a19a8d0ee0" exitCode=0 Dec 05 08:55:17 crc kubenswrapper[4780]: I1205 08:55:17.284116 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" event={"ID":"77265a47-156e-4225-9ca8-0cb7000048b3","Type":"ContainerDied","Data":"bd98a1bdfe8f1692a8386b52d77002ba6e4035ec48ded8e4228f62a19a8d0ee0"} Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.707685 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.783274 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttg4\" (UniqueName: \"kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4\") pod \"77265a47-156e-4225-9ca8-0cb7000048b3\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.783369 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle\") pod \"77265a47-156e-4225-9ca8-0cb7000048b3\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.783413 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0\") pod \"77265a47-156e-4225-9ca8-0cb7000048b3\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.783528 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key\") pod \"77265a47-156e-4225-9ca8-0cb7000048b3\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.783588 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory\") pod \"77265a47-156e-4225-9ca8-0cb7000048b3\" (UID: \"77265a47-156e-4225-9ca8-0cb7000048b3\") " Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.789418 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "77265a47-156e-4225-9ca8-0cb7000048b3" (UID: "77265a47-156e-4225-9ca8-0cb7000048b3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.789611 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4" (OuterVolumeSpecName: "kube-api-access-bttg4") pod "77265a47-156e-4225-9ca8-0cb7000048b3" (UID: "77265a47-156e-4225-9ca8-0cb7000048b3"). InnerVolumeSpecName "kube-api-access-bttg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.812285 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "77265a47-156e-4225-9ca8-0cb7000048b3" (UID: "77265a47-156e-4225-9ca8-0cb7000048b3"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.813847 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory" (OuterVolumeSpecName: "inventory") pod "77265a47-156e-4225-9ca8-0cb7000048b3" (UID: "77265a47-156e-4225-9ca8-0cb7000048b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.814052 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "77265a47-156e-4225-9ca8-0cb7000048b3" (UID: "77265a47-156e-4225-9ca8-0cb7000048b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.885791 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttg4\" (UniqueName: \"kubernetes.io/projected/77265a47-156e-4225-9ca8-0cb7000048b3-kube-api-access-bttg4\") on node \"crc\" DevicePath \"\"" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.885829 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.885842 4780 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.885850 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:55:18 crc kubenswrapper[4780]: I1205 08:55:18.885860 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77265a47-156e-4225-9ca8-0cb7000048b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.317672 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" event={"ID":"77265a47-156e-4225-9ca8-0cb7000048b3","Type":"ContainerDied","Data":"22ba8b01fd68679b1c2f351567c9260277be914f9d20a36c248691f398b4935b"} Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.317749 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ba8b01fd68679b1c2f351567c9260277be914f9d20a36c248691f398b4935b" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.317854 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bwvvh" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.416715 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mjwl8"] Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417204 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417227 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417254 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="extract-utilities" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417263 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="extract-utilities" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417287 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="extract-utilities" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417297 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="extract-utilities" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417314 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77265a47-156e-4225-9ca8-0cb7000048b3" containerName="libvirt-openstack-openstack-cell1" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417323 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="77265a47-156e-4225-9ca8-0cb7000048b3" containerName="libvirt-openstack-openstack-cell1" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417347 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="extract-content" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417354 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="extract-content" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417373 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="extract-content" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417382 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="extract-content" Dec 05 08:55:19 crc kubenswrapper[4780]: E1205 08:55:19.417404 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417411 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417764 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="736094ec-007a-40f7-9e2b-53b1ae4ccc31" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417784 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0a52c-6e10-4808-b927-6a7ce94fc099" containerName="registry-server" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.417810 4780 
memory_manager.go:354] "RemoveStaleState removing state" podUID="77265a47-156e-4225-9ca8-0cb7000048b3" containerName="libvirt-openstack-openstack-cell1" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.422099 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.426624 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.426846 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.426972 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.427298 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.427355 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.427572 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.429306 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.447167 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mjwl8"] Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.498823 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499171 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499382 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499519 
4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499629 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499717 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.499850 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xhf\" (UniqueName: \"kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.500005 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602100 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602163 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602206 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 
08:55:19.602230 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602274 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602302 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602328 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602392 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xhf\" (UniqueName: \"kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.602428 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.603999 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607078 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607271 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607560 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607759 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607761 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.607830 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.609618 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.627222 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xhf\" (UniqueName: \"kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf\") pod \"nova-cell1-openstack-openstack-cell1-mjwl8\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:19 crc kubenswrapper[4780]: I1205 08:55:19.758024 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:55:20 crc kubenswrapper[4780]: I1205 08:55:20.297986 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-mjwl8"] Dec 05 08:55:20 crc kubenswrapper[4780]: W1205 08:55:20.301035 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc60cde_15e4_44b7_a344_60f6420d9374.slice/crio-c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542 WatchSource:0}: Error finding container c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542: Status 404 returned error can't find the container with id c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542 Dec 05 08:55:20 crc kubenswrapper[4780]: I1205 08:55:20.327004 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" event={"ID":"3fc60cde-15e4-44b7-a344-60f6420d9374","Type":"ContainerStarted","Data":"c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542"} Dec 05 08:55:22 crc kubenswrapper[4780]: I1205 08:55:22.362225 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" event={"ID":"3fc60cde-15e4-44b7-a344-60f6420d9374","Type":"ContainerStarted","Data":"bcb6aa7a0bf19265c02a0b21fa508b3105e3514a02b498b6dbbb36e8b85c9874"} Dec 05 08:55:22 crc kubenswrapper[4780]: I1205 08:55:22.386875 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" podStartSLOduration=1.847103242 podStartE2EDuration="3.386853464s" podCreationTimestamp="2025-12-05 08:55:19 +0000 UTC" firstStartedPulling="2025-12-05 08:55:20.303694458 +0000 UTC m=+7754.373210800" lastFinishedPulling="2025-12-05 08:55:21.84344467 +0000 UTC m=+7755.912961022" observedRunningTime="2025-12-05 08:55:22.37991307 +0000 UTC m=+7756.449429412" watchObservedRunningTime="2025-12-05 08:55:22.386853464 +0000 UTC m=+7756.456369806" Dec 05 08:57:29 crc kubenswrapper[4780]: I1205 08:57:29.908303 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:57:29 crc kubenswrapper[4780]: I1205 08:57:29.908913 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.156500 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.168092 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.172778 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtrp\" (UniqueName: \"kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.187384 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.187848 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.186135 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.289650 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtrp\" (UniqueName: \"kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.289772 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.290536 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.290982 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.291607 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.316705 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dmtrp\" (UniqueName: \"kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp\") pod \"community-operators-2hpkv\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:48 crc kubenswrapper[4780]: I1205 08:57:48.510658 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:49 crc kubenswrapper[4780]: I1205 08:57:49.049226 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:57:49 crc kubenswrapper[4780]: I1205 08:57:49.744361 4780 generic.go:334] "Generic (PLEG): container finished" podID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerID="d5e9e71a0b584e3ddc7f7cc34049519a493a76e73a0578ba3c6c38cb0fc2c0c7" exitCode=0 Dec 05 08:57:49 crc kubenswrapper[4780]: I1205 08:57:49.745828 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerDied","Data":"d5e9e71a0b584e3ddc7f7cc34049519a493a76e73a0578ba3c6c38cb0fc2c0c7"} Dec 05 08:57:49 crc kubenswrapper[4780]: I1205 08:57:49.745852 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerStarted","Data":"568ac17d767949641b8ca6a1500a3ef15d09f959477ff3e97444587c4d487474"} Dec 05 08:57:49 crc kubenswrapper[4780]: I1205 08:57:49.746352 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 08:57:50 crc kubenswrapper[4780]: I1205 08:57:50.757124 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerStarted","Data":"3a8b6f47851db65ef9624e629aeafc70113f3a66ec9a51682f9f5216fc8ff823"} Dec 05 08:57:51 crc kubenswrapper[4780]: I1205 08:57:51.766640 4780 generic.go:334] "Generic (PLEG): container finished" podID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerID="3a8b6f47851db65ef9624e629aeafc70113f3a66ec9a51682f9f5216fc8ff823" exitCode=0 Dec 05 08:57:51 crc kubenswrapper[4780]: I1205 08:57:51.766970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerDied","Data":"3a8b6f47851db65ef9624e629aeafc70113f3a66ec9a51682f9f5216fc8ff823"} Dec 05 08:57:53 crc kubenswrapper[4780]: I1205 08:57:53.788813 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerStarted","Data":"d58451a5dfd7cc21b0b80247227016b68cf96087abd81f4b9aa51ba7d5dfe137"} Dec 05 08:57:53 crc kubenswrapper[4780]: I1205 08:57:53.815292 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hpkv" podStartSLOduration=2.800130317 podStartE2EDuration="5.815274395s" podCreationTimestamp="2025-12-05 08:57:48 +0000 UTC" firstStartedPulling="2025-12-05 08:57:49.746084158 +0000 UTC m=+7903.815600490" lastFinishedPulling="2025-12-05 08:57:52.761228236 +0000 UTC m=+7906.830744568" observedRunningTime="2025-12-05 08:57:53.807317453 +0000 UTC m=+7907.876833785" watchObservedRunningTime="2025-12-05 
08:57:53.815274395 +0000 UTC m=+7907.884790727" Dec 05 08:57:58 crc kubenswrapper[4780]: I1205 08:57:58.511134 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:58 crc kubenswrapper[4780]: I1205 08:57:58.511784 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:58 crc kubenswrapper[4780]: I1205 08:57:58.561996 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:58 crc kubenswrapper[4780]: I1205 08:57:58.883203 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:57:58 crc kubenswrapper[4780]: I1205 08:57:58.933072 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:57:59 crc kubenswrapper[4780]: I1205 08:57:59.910190 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:57:59 crc kubenswrapper[4780]: I1205 08:57:59.910420 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:58:00 crc kubenswrapper[4780]: I1205 08:58:00.851395 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hpkv" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="registry-server" containerID="cri-o://d58451a5dfd7cc21b0b80247227016b68cf96087abd81f4b9aa51ba7d5dfe137" gracePeriod=2 Dec 05 08:58:03 crc kubenswrapper[4780]: I1205 08:58:03.877909 4780 generic.go:334] "Generic (PLEG): container finished" podID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerID="d58451a5dfd7cc21b0b80247227016b68cf96087abd81f4b9aa51ba7d5dfe137" exitCode=0 Dec 05 08:58:03 crc kubenswrapper[4780]: I1205 08:58:03.877911 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerDied","Data":"d58451a5dfd7cc21b0b80247227016b68cf96087abd81f4b9aa51ba7d5dfe137"} Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.026702 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.156370 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities\") pod \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.156583 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content\") pod \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.156635 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtrp\" (UniqueName: \"kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp\") pod \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\" (UID: \"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0\") " Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.157570 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities" (OuterVolumeSpecName: "utilities") pod "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" (UID: "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.158099 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.165126 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp" (OuterVolumeSpecName: "kube-api-access-dmtrp") pod "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" (UID: "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0"). InnerVolumeSpecName "kube-api-access-dmtrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.215076 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" (UID: "34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.259601 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.259848 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtrp\" (UniqueName: \"kubernetes.io/projected/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0-kube-api-access-dmtrp\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.897255 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hpkv" event={"ID":"34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0","Type":"ContainerDied","Data":"568ac17d767949641b8ca6a1500a3ef15d09f959477ff3e97444587c4d487474"} Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.897324 4780 scope.go:117] "RemoveContainer" containerID="d58451a5dfd7cc21b0b80247227016b68cf96087abd81f4b9aa51ba7d5dfe137" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.897641 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hpkv" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.921027 4780 scope.go:117] "RemoveContainer" containerID="3a8b6f47851db65ef9624e629aeafc70113f3a66ec9a51682f9f5216fc8ff823" Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.943302 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.957634 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hpkv"] Dec 05 08:58:05 crc kubenswrapper[4780]: I1205 08:58:05.967605 4780 scope.go:117] "RemoveContainer" containerID="d5e9e71a0b584e3ddc7f7cc34049519a493a76e73a0578ba3c6c38cb0fc2c0c7" Dec 05 08:58:06 crc kubenswrapper[4780]: I1205 08:58:06.151481 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" path="/var/lib/kubelet/pods/34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0/volumes" Dec 05 08:58:20 crc kubenswrapper[4780]: I1205 08:58:20.025546 4780 generic.go:334] "Generic (PLEG): container finished" podID="3fc60cde-15e4-44b7-a344-60f6420d9374" containerID="bcb6aa7a0bf19265c02a0b21fa508b3105e3514a02b498b6dbbb36e8b85c9874" exitCode=0 Dec 05 08:58:20 crc kubenswrapper[4780]: I1205 08:58:20.025776 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" event={"ID":"3fc60cde-15e4-44b7-a344-60f6420d9374","Type":"ContainerDied","Data":"bcb6aa7a0bf19265c02a0b21fa508b3105e3514a02b498b6dbbb36e8b85c9874"} Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.517018 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.595911 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596342 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596408 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596580 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596610 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6xhf\" (UniqueName: \"kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596683 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596811 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.596844 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory\") pod \"3fc60cde-15e4-44b7-a344-60f6420d9374\" (UID: \"3fc60cde-15e4-44b7-a344-60f6420d9374\") " Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.603792 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.615638 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf" (OuterVolumeSpecName: "kube-api-access-s6xhf") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "kube-api-access-s6xhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.629277 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.629520 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.630606 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.631926 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.632564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.632840 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory" (OuterVolumeSpecName: "inventory") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.654446 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3fc60cde-15e4-44b7-a344-60f6420d9374" (UID: "3fc60cde-15e4-44b7-a344-60f6420d9374"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700794 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700827 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700839 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700849 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700857 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700912 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700922 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700930 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6xhf\" (UniqueName: \"kubernetes.io/projected/3fc60cde-15e4-44b7-a344-60f6420d9374-kube-api-access-s6xhf\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:21 crc kubenswrapper[4780]: I1205 08:58:21.700940 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3fc60cde-15e4-44b7-a344-60f6420d9374-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.046479 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" event={"ID":"3fc60cde-15e4-44b7-a344-60f6420d9374","Type":"ContainerDied","Data":"c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542"} Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.046520 4780 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c63acfb08a7db3c8fc561806f238c9c68f76ad7ba28d121600e63816e98db542" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.046578 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-mjwl8" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.151456 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-88kbg"] Dec 05 08:58:22 crc kubenswrapper[4780]: E1205 08:58:22.151962 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="extract-content" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.151985 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="extract-content" Dec 05 08:58:22 crc kubenswrapper[4780]: E1205 08:58:22.151999 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="registry-server" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.152007 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="registry-server" Dec 05 08:58:22 crc kubenswrapper[4780]: E1205 08:58:22.152042 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc60cde-15e4-44b7-a344-60f6420d9374" containerName="nova-cell1-openstack-openstack-cell1" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.152049 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc60cde-15e4-44b7-a344-60f6420d9374" containerName="nova-cell1-openstack-openstack-cell1" Dec 05 08:58:22 crc kubenswrapper[4780]: E1205 08:58:22.152072 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="extract-utilities" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.152080 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="extract-utilities" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.152299 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc60cde-15e4-44b7-a344-60f6420d9374" containerName="nova-cell1-openstack-openstack-cell1" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.152324 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="34dd38b9-3ab1-4293-8fb1-f15e6ebe08b0" containerName="registry-server" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.153502 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.156699 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.156901 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.157033 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.157139 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.158861 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.164999 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-88kbg"] Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.220788 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.220924 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.220981 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.221022 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s796v\" (UniqueName: \"kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.221101 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.222312 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.222389 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325287 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325455 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325525 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325590 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325663 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325705 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.325738 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s796v\" (UniqueName: 
\"kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.329214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.329214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.329994 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.330214 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.330843 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.331246 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.345725 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s796v\" (UniqueName: \"kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v\") pod \"telemetry-openstack-openstack-cell1-88kbg\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:22 crc kubenswrapper[4780]: I1205 08:58:22.474453 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 08:58:23 crc kubenswrapper[4780]: I1205 08:58:23.031111 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-88kbg"] Dec 05 08:58:23 crc kubenswrapper[4780]: I1205 08:58:23.069217 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" event={"ID":"80fafa83-0a64-48b0-9bd9-a5c59a344b8d","Type":"ContainerStarted","Data":"c32d9fce72f56de73749a9ee34a334a56303a270658f3a15145c664201bfa8ac"} Dec 05 08:58:24 crc kubenswrapper[4780]: I1205 08:58:24.079312 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" event={"ID":"80fafa83-0a64-48b0-9bd9-a5c59a344b8d","Type":"ContainerStarted","Data":"86a4d7805dac9fdd1c8d3fad440289c831decfb412429e72198405caf27747f5"} Dec 05 08:58:24 crc kubenswrapper[4780]: I1205 08:58:24.101725 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" podStartSLOduration=1.668860568 podStartE2EDuration="2.101707351s" podCreationTimestamp="2025-12-05 08:58:22 +0000 UTC" firstStartedPulling="2025-12-05 08:58:23.042072064 +0000 UTC m=+7937.111588396" lastFinishedPulling="2025-12-05 08:58:23.474918847 +0000 UTC m=+7937.544435179" observedRunningTime="2025-12-05 08:58:24.097340935 +0000 UTC m=+7938.166857257" watchObservedRunningTime="2025-12-05 08:58:24.101707351 +0000 UTC m=+7938.171223683" Dec 05 08:58:29 crc kubenswrapper[4780]: I1205 08:58:29.908505 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 08:58:29 crc kubenswrapper[4780]: I1205 08:58:29.909057 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 08:58:29 crc kubenswrapper[4780]: I1205 08:58:29.909113 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 08:58:29 crc kubenswrapper[4780]: I1205 08:58:29.909909 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 08:58:29 crc kubenswrapper[4780]: I1205 08:58:29.909967 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa" gracePeriod=600 Dec 05 08:58:30 crc kubenswrapper[4780]: I1205 08:58:30.137007 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" 
containerID="68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa" exitCode=0 Dec 05 08:58:30 crc kubenswrapper[4780]: I1205 08:58:30.137065 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa"} Dec 05 08:58:30 crc kubenswrapper[4780]: I1205 08:58:30.137110 4780 scope.go:117] "RemoveContainer" containerID="c46d8a221cf9a4cdbe4af6b74ff1f5e9c604c89793933e487245531569842256" Dec 05 08:58:31 crc kubenswrapper[4780]: I1205 08:58:31.148009 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1"} Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.162876 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc"] Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.165112 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.167313 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.167643 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.179999 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc"] Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.231267 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.231349 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.231593 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9x2\" (UniqueName: \"kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.333534 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume\") pod 
\"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.333674 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.333730 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9x2\" (UniqueName: \"kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.334653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.339308 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.350042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9x2\" (UniqueName: \"kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2\") pod \"collect-profiles-29415420-z9smc\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.509601 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:00 crc kubenswrapper[4780]: I1205 09:00:00.962596 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc"] Dec 05 09:00:01 crc kubenswrapper[4780]: I1205 09:00:01.982271 4780 generic.go:334] "Generic (PLEG): container finished" podID="beabb19d-470d-42a1-9db2-ca90b0880b88" containerID="486335de1cb1adfeda303488bf2bd868ebd3ff9a1235b6332261ffd7a0b7b78f" exitCode=0 Dec 05 09:00:01 crc kubenswrapper[4780]: I1205 09:00:01.982378 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" event={"ID":"beabb19d-470d-42a1-9db2-ca90b0880b88","Type":"ContainerDied","Data":"486335de1cb1adfeda303488bf2bd868ebd3ff9a1235b6332261ffd7a0b7b78f"} Dec 05 09:00:01 crc kubenswrapper[4780]: I1205 09:00:01.982900 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" event={"ID":"beabb19d-470d-42a1-9db2-ca90b0880b88","Type":"ContainerStarted","Data":"bd9c4aa35157278a28c8b3167a403cc8e5996e4ee53a22535379f6bb9530755f"} Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.338976 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.398006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume\") pod \"beabb19d-470d-42a1-9db2-ca90b0880b88\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.398076 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume\") pod \"beabb19d-470d-42a1-9db2-ca90b0880b88\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.398138 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj9x2\" (UniqueName: \"kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2\") pod \"beabb19d-470d-42a1-9db2-ca90b0880b88\" (UID: \"beabb19d-470d-42a1-9db2-ca90b0880b88\") " Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.398938 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume" (OuterVolumeSpecName: "config-volume") pod "beabb19d-470d-42a1-9db2-ca90b0880b88" (UID: "beabb19d-470d-42a1-9db2-ca90b0880b88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.404144 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "beabb19d-470d-42a1-9db2-ca90b0880b88" (UID: "beabb19d-470d-42a1-9db2-ca90b0880b88"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.404231 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2" (OuterVolumeSpecName: "kube-api-access-mj9x2") pod "beabb19d-470d-42a1-9db2-ca90b0880b88" (UID: "beabb19d-470d-42a1-9db2-ca90b0880b88"). InnerVolumeSpecName "kube-api-access-mj9x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.500122 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beabb19d-470d-42a1-9db2-ca90b0880b88-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.500158 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/beabb19d-470d-42a1-9db2-ca90b0880b88-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:00:03 crc kubenswrapper[4780]: I1205 09:00:03.500171 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj9x2\" (UniqueName: \"kubernetes.io/projected/beabb19d-470d-42a1-9db2-ca90b0880b88-kube-api-access-mj9x2\") on node \"crc\" DevicePath \"\"" Dec 05 09:00:04 crc kubenswrapper[4780]: I1205 09:00:04.004826 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" event={"ID":"beabb19d-470d-42a1-9db2-ca90b0880b88","Type":"ContainerDied","Data":"bd9c4aa35157278a28c8b3167a403cc8e5996e4ee53a22535379f6bb9530755f"} Dec 05 09:00:04 crc kubenswrapper[4780]: I1205 09:00:04.004863 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd9c4aa35157278a28c8b3167a403cc8e5996e4ee53a22535379f6bb9530755f" Dec 05 09:00:04 crc kubenswrapper[4780]: I1205 09:00:04.004894 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc" Dec 05 09:00:04 crc kubenswrapper[4780]: I1205 09:00:04.406355 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd"] Dec 05 09:00:04 crc kubenswrapper[4780]: I1205 09:00:04.415690 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415375-2vbrd"] Dec 05 09:00:06 crc kubenswrapper[4780]: I1205 09:00:06.159871 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ae8a90-29c1-4480-ad3f-732a13612443" path="/var/lib/kubelet/pods/13ae8a90-29c1-4480-ad3f-732a13612443/volumes" Dec 05 09:00:35 crc kubenswrapper[4780]: I1205 09:00:35.437664 4780 scope.go:117] "RemoveContainer" containerID="bbded386c637973b1313c0218620a8e2d137c57d7573ad1a375e78f0d02ac279" Dec 05 09:00:59 crc kubenswrapper[4780]: I1205 09:00:59.907652 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:00:59 crc kubenswrapper[4780]: I1205 09:00:59.908241 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.154431 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29415421-7jkhn"] Dec 05 09:01:00 crc kubenswrapper[4780]: E1205 09:01:00.154750 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beabb19d-470d-42a1-9db2-ca90b0880b88" containerName="collect-profiles" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.154768 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="beabb19d-470d-42a1-9db2-ca90b0880b88" containerName="collect-profiles" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.155031 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="beabb19d-470d-42a1-9db2-ca90b0880b88" containerName="collect-profiles" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.155762 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.160722 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415421-7jkhn"] Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.310874 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmmd\" (UniqueName: \"kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.311229 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.311336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.311494 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.413951 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.414018 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.414077 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.414265 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmmd\" (UniqueName: \"kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.420494 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.420525 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.420841 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.432493 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmmd\" (UniqueName: \"kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd\") pod \"keystone-cron-29415421-7jkhn\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.486594 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:00 crc kubenswrapper[4780]: I1205 09:01:00.995383 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415421-7jkhn"] Dec 05 09:01:01 crc kubenswrapper[4780]: I1205 09:01:01.539438 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415421-7jkhn" event={"ID":"8a7da00b-061e-4af6-883c-57fd7deb39e1","Type":"ContainerStarted","Data":"f1a87878b2dace9aebb3370f25ac10c5297829e44367ad94f4689d82a2070162"} Dec 05 09:01:01 crc kubenswrapper[4780]: I1205 09:01:01.540082 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415421-7jkhn" event={"ID":"8a7da00b-061e-4af6-883c-57fd7deb39e1","Type":"ContainerStarted","Data":"44d31bb2acc142b62428f8f804e98c41ea9a4c7712882621373fe4d1527008fb"} Dec 05 09:01:01 crc kubenswrapper[4780]: I1205 09:01:01.564273 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29415421-7jkhn" podStartSLOduration=1.564252049 podStartE2EDuration="1.564252049s" podCreationTimestamp="2025-12-05 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:01:01.555327462 +0000 UTC m=+8095.624843794" watchObservedRunningTime="2025-12-05 09:01:01.564252049 +0000 UTC m=+8095.633768401" Dec 05 09:01:04 crc kubenswrapper[4780]: I1205 09:01:04.566617 4780 generic.go:334] "Generic (PLEG): container finished" podID="8a7da00b-061e-4af6-883c-57fd7deb39e1" containerID="f1a87878b2dace9aebb3370f25ac10c5297829e44367ad94f4689d82a2070162" exitCode=0 Dec 05 09:01:04 crc kubenswrapper[4780]: I1205 09:01:04.566707 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415421-7jkhn" event={"ID":"8a7da00b-061e-4af6-883c-57fd7deb39e1","Type":"ContainerDied","Data":"f1a87878b2dace9aebb3370f25ac10c5297829e44367ad94f4689d82a2070162"} Dec 05 09:01:05 crc kubenswrapper[4780]: 
I1205 09:01:05.916268 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.021495 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys\") pod \"8a7da00b-061e-4af6-883c-57fd7deb39e1\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.021613 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data\") pod \"8a7da00b-061e-4af6-883c-57fd7deb39e1\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.021671 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle\") pod \"8a7da00b-061e-4af6-883c-57fd7deb39e1\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.021855 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pmmd\" (UniqueName: \"kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd\") pod \"8a7da00b-061e-4af6-883c-57fd7deb39e1\" (UID: \"8a7da00b-061e-4af6-883c-57fd7deb39e1\") " Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.027962 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a7da00b-061e-4af6-883c-57fd7deb39e1" (UID: "8a7da00b-061e-4af6-883c-57fd7deb39e1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.031089 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd" (OuterVolumeSpecName: "kube-api-access-7pmmd") pod "8a7da00b-061e-4af6-883c-57fd7deb39e1" (UID: "8a7da00b-061e-4af6-883c-57fd7deb39e1"). InnerVolumeSpecName "kube-api-access-7pmmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.050222 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a7da00b-061e-4af6-883c-57fd7deb39e1" (UID: "8a7da00b-061e-4af6-883c-57fd7deb39e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.077511 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data" (OuterVolumeSpecName: "config-data") pod "8a7da00b-061e-4af6-883c-57fd7deb39e1" (UID: "8a7da00b-061e-4af6-883c-57fd7deb39e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.124005 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.124061 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.124077 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pmmd\" (UniqueName: \"kubernetes.io/projected/8a7da00b-061e-4af6-883c-57fd7deb39e1-kube-api-access-7pmmd\") on node \"crc\" DevicePath \"\"" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.124088 4780 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7da00b-061e-4af6-883c-57fd7deb39e1-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.592630 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415421-7jkhn" event={"ID":"8a7da00b-061e-4af6-883c-57fd7deb39e1","Type":"ContainerDied","Data":"44d31bb2acc142b62428f8f804e98c41ea9a4c7712882621373fe4d1527008fb"} Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.593019 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d31bb2acc142b62428f8f804e98c41ea9a4c7712882621373fe4d1527008fb" Dec 05 09:01:06 crc kubenswrapper[4780]: I1205 09:01:06.592672 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415421-7jkhn" Dec 05 09:01:29 crc kubenswrapper[4780]: I1205 09:01:29.908371 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:01:29 crc kubenswrapper[4780]: I1205 09:01:29.908998 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.836916 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:01:42 crc kubenswrapper[4780]: E1205 09:01:42.837895 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7da00b-061e-4af6-883c-57fd7deb39e1" containerName="keystone-cron" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.837911 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7da00b-061e-4af6-883c-57fd7deb39e1" containerName="keystone-cron" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.838116 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7da00b-061e-4af6-883c-57fd7deb39e1" containerName="keystone-cron" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.839670 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.850849 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.901578 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.901624 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:42 crc kubenswrapper[4780]: I1205 09:01:42.901838 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254fz\" (UniqueName: \"kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.003890 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.004211 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.004308 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254fz\" (UniqueName: \"kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.004545 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.004640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.033745 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-254fz\" (UniqueName: \"kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz\") pod \"redhat-operators-qvvzm\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.165872 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.632870 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.963189 4780 generic.go:334] "Generic (PLEG): container finished" podID="0650072f-933b-402a-93fa-e2df4a081864" containerID="a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5" exitCode=0 Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.963249 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerDied","Data":"a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5"} Dec 05 09:01:43 crc kubenswrapper[4780]: I1205 09:01:43.963279 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerStarted","Data":"9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975"} Dec 05 09:01:44 crc kubenswrapper[4780]: I1205 09:01:44.974493 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerStarted","Data":"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5"} Dec 05 09:01:48 crc kubenswrapper[4780]: I1205 09:01:48.002900 4780 generic.go:334] "Generic (PLEG): container finished" podID="0650072f-933b-402a-93fa-e2df4a081864" containerID="2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5" exitCode=0 Dec 05 09:01:48 crc kubenswrapper[4780]: I1205 09:01:48.002990 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerDied","Data":"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5"} Dec 05 09:01:49 crc kubenswrapper[4780]: I1205 09:01:49.013491 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerStarted","Data":"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce"} Dec 05 09:01:49 crc kubenswrapper[4780]: I1205 09:01:49.039104 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvvzm" podStartSLOduration=2.611462789 podStartE2EDuration="7.038833359s" podCreationTimestamp="2025-12-05 09:01:42 +0000 UTC" firstStartedPulling="2025-12-05 09:01:43.965131157 +0000 UTC m=+8138.034647489" lastFinishedPulling="2025-12-05 09:01:48.392501727 +0000 UTC m=+8142.462018059" observedRunningTime="2025-12-05 09:01:49.028986318 +0000 UTC m=+8143.098502640" watchObservedRunningTime="2025-12-05 09:01:49.038833359 +0000 UTC m=+8143.108349691" Dec 05 09:01:53 crc kubenswrapper[4780]: I1205 09:01:53.167079 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 
Dec 05 09:01:53 crc kubenswrapper[4780]: I1205 09:01:53.168194 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvvzm"
Dec 05 09:01:54 crc kubenswrapper[4780]: I1205 09:01:54.216994 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvvzm" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="registry-server" probeResult="failure" output=<
Dec 05 09:01:54 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s
Dec 05 09:01:54 crc kubenswrapper[4780]: >
Dec 05 09:01:59 crc kubenswrapper[4780]: I1205 09:01:59.907401 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 09:01:59 crc kubenswrapper[4780]: I1205 09:01:59.907949 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 09:01:59 crc kubenswrapper[4780]: I1205 09:01:59.907995 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd"
Dec 05 09:01:59 crc kubenswrapper[4780]: I1205 09:01:59.908719 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 09:01:59 crc kubenswrapper[4780]: I1205 09:01:59.908762 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" gracePeriod=600
Dec 05 09:02:00 crc kubenswrapper[4780]: E1205 09:02:00.029539 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 09:02:00 crc kubenswrapper[4780]: I1205 09:02:00.106201 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" exitCode=0
Dec 05 09:02:00 crc kubenswrapper[4780]: I1205 09:02:00.106250 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1"}
Dec 05 09:02:00 crc kubenswrapper[4780]: I1205 09:02:00.106306 4780 scope.go:117] "RemoveContainer" containerID="68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa"
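The "back-off 5m0s" above is CrashLoopBackOff at its cap: kubelet delays container restarts starting at 10s and doubling after each failed restart, up to a maximum of five minutes, and the identical errors repeated below (09:02:11, 09:02:26, 09:02:37, 09:02:52) are sync attempts landing inside that window. A sketch of the documented schedule (Python):

# Documented CrashLoopBackOff schedule: 10s base delay, doubled after each
# failed restart, capped at 5m (300s) -- hence "back-off 5m0s" once the
# machine-config-daemon container has crashed enough times in a row.
def crashloop_delay(restarts, base=10, cap=300):
    return min(base * 2 ** restarts, cap)

print([crashloop_delay(n) for n in range(7)])  # [10, 20, 40, 80, 160, 300, 300]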
"RemoveContainer" containerID="68ee7fc36dd7ec6b15590ff20bd042f823d5d6e91a960e7981d60dcbeb906daa" Dec 05 09:02:00 crc kubenswrapper[4780]: I1205 09:02:00.107141 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:02:00 crc kubenswrapper[4780]: E1205 09:02:00.107456 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:02:03 crc kubenswrapper[4780]: I1205 09:02:03.219008 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:02:03 crc kubenswrapper[4780]: I1205 09:02:03.274334 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:02:03 crc kubenswrapper[4780]: I1205 09:02:03.455445 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.158992 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvvzm" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="registry-server" containerID="cri-o://02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce" gracePeriod=2 Dec 05 09:02:05 crc kubenswrapper[4780]: E1205 09:02:05.384055 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-conmon-02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce.scope\": RecentStats: unable to find data in memory cache]" Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.713496 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.898486 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254fz\" (UniqueName: \"kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz\") pod \"0650072f-933b-402a-93fa-e2df4a081864\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.898868 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content\") pod \"0650072f-933b-402a-93fa-e2df4a081864\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.898946 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities\") pod \"0650072f-933b-402a-93fa-e2df4a081864\" (UID: \"0650072f-933b-402a-93fa-e2df4a081864\") " Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.900095 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities" (OuterVolumeSpecName: "utilities") pod "0650072f-933b-402a-93fa-e2df4a081864" (UID: "0650072f-933b-402a-93fa-e2df4a081864"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:02:05 crc kubenswrapper[4780]: I1205 09:02:05.906104 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz" (OuterVolumeSpecName: "kube-api-access-254fz") pod "0650072f-933b-402a-93fa-e2df4a081864" (UID: "0650072f-933b-402a-93fa-e2df4a081864"). InnerVolumeSpecName "kube-api-access-254fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.001673 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.001724 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254fz\" (UniqueName: \"kubernetes.io/projected/0650072f-933b-402a-93fa-e2df4a081864-kube-api-access-254fz\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.014613 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0650072f-933b-402a-93fa-e2df4a081864" (UID: "0650072f-933b-402a-93fa-e2df4a081864"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.104556 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0650072f-933b-402a-93fa-e2df4a081864-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.172455 4780 generic.go:334] "Generic (PLEG): container finished" podID="0650072f-933b-402a-93fa-e2df4a081864" containerID="02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce" exitCode=0 Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.172509 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvvzm" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.172539 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerDied","Data":"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce"} Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.172583 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvvzm" event={"ID":"0650072f-933b-402a-93fa-e2df4a081864","Type":"ContainerDied","Data":"9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975"} Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.172608 4780 scope.go:117] "RemoveContainer" containerID="02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.205649 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.210048 4780 scope.go:117] "RemoveContainer" containerID="2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.217857 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvvzm"] Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.238511 4780 scope.go:117] "RemoveContainer" containerID="a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.281170 4780 scope.go:117] "RemoveContainer" containerID="02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce" Dec 05 09:02:06 crc kubenswrapper[4780]: E1205 09:02:06.281577 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce\": container with ID starting with 02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce not found: ID does not exist" containerID="02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.281611 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce"} err="failed to get container status \"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce\": rpc error: code = NotFound desc = could not find container \"02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce\": container with ID starting with 02eca28f88f90bd85a0656ba15729b037903c0d57d8547e1e2f351f8a392ecce not found: ID does not exist" Dec 05 09:02:06 crc 
kubenswrapper[4780]: I1205 09:02:06.281632 4780 scope.go:117] "RemoveContainer" containerID="2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5" Dec 05 09:02:06 crc kubenswrapper[4780]: E1205 09:02:06.282187 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5\": container with ID starting with 2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5 not found: ID does not exist" containerID="2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.282267 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5"} err="failed to get container status \"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5\": rpc error: code = NotFound desc = could not find container \"2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5\": container with ID starting with 2574f10345a27450996bea7bbc2f95bf1bb66488ba9e0eeea49a8250f91d4ea5 not found: ID does not exist" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.282326 4780 scope.go:117] "RemoveContainer" containerID="a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5" Dec 05 09:02:06 crc kubenswrapper[4780]: E1205 09:02:06.282662 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5\": container with ID starting with a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5 not found: ID does not exist" containerID="a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5" Dec 05 09:02:06 crc kubenswrapper[4780]: I1205 09:02:06.282692 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5"} err="failed to get container status \"a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5\": rpc error: code = NotFound desc = could not find container \"a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5\": container with ID starting with a70974a0adf775cabd21565a4237003e56ccf58537f4662705842e314e2545e5 not found: ID does not exist" Dec 05 09:02:08 crc kubenswrapper[4780]: I1205 09:02:08.150991 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0650072f-933b-402a-93fa-e2df4a081864" path="/var/lib/kubelet/pods/0650072f-933b-402a-93fa-e2df4a081864/volumes" Dec 05 09:02:11 crc kubenswrapper[4780]: I1205 09:02:11.138853 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:02:11 crc kubenswrapper[4780]: E1205 09:02:11.139632 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:02:15 crc kubenswrapper[4780]: E1205 09:02:15.661261 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
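The cadvisor_stats_provider errors here and repeated below (09:02:25, 09:02:36, 09:02:46) appear to be follow-on noise from the redhat-operators-qvvzm teardown: the stats provider still asks cAdvisor's in-memory cache about the deleted pod's cgroup slices, finds no recent samples, and reports the misses as partial failures rather than failing the whole query. Schematically (illustrative Python; the cache layout and names are invented):

# Illustrative only: a stats cache that has dropped a deleted pod's cgroups
# answers with per-cgroup misses, not a hard error.
cache = {"/kubepods.slice/live-pod.slice": {"cpu": 0.12, "mem_bytes": 1 << 20}}

def container_info_v2(cgroups):
    stats, failures = {}, []
    for cg in cgroups:
        if cg in cache:
            stats[cg] = cache[cg]
        else:
            failures.append(f'"{cg}": RecentStats: unable to find data in memory cache')
    return stats, failures

_, failures = container_info_v2(["/kubepods.slice/live-pod.slice",
                                 "/kubepods.slice/deleted-pod.slice"])
print(failures)  # one miss for the deleted pod's slice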
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice\": RecentStats: unable to find data in memory cache]" Dec 05 09:02:25 crc kubenswrapper[4780]: E1205 09:02:25.909305 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice\": RecentStats: unable to find data in memory cache]" Dec 05 09:02:26 crc kubenswrapper[4780]: I1205 09:02:26.145874 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:02:26 crc kubenswrapper[4780]: E1205 09:02:26.146196 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:02:36 crc kubenswrapper[4780]: E1205 09:02:36.159513 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975\": RecentStats: unable to find data in memory cache]" Dec 05 09:02:37 crc kubenswrapper[4780]: I1205 09:02:37.139232 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:02:37 crc kubenswrapper[4780]: E1205 09:02:37.139889 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:02:37 crc kubenswrapper[4780]: I1205 09:02:37.466473 4780 generic.go:334] "Generic (PLEG): container finished" podID="80fafa83-0a64-48b0-9bd9-a5c59a344b8d" containerID="86a4d7805dac9fdd1c8d3fad440289c831decfb412429e72198405caf27747f5" exitCode=0 Dec 05 09:02:37 crc kubenswrapper[4780]: I1205 09:02:37.466521 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" event={"ID":"80fafa83-0a64-48b0-9bd9-a5c59a344b8d","Type":"ContainerDied","Data":"86a4d7805dac9fdd1c8d3fad440289c831decfb412429e72198405caf27747f5"} Dec 05 09:02:38 crc kubenswrapper[4780]: I1205 09:02:38.999275 4780 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102304 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102799 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102822 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102843 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s796v\" (UniqueName: \"kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102900 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.102982 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.103062 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2\") pod \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\" (UID: \"80fafa83-0a64-48b0-9bd9-a5c59a344b8d\") " Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.107869 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.121327 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v" (OuterVolumeSpecName: "kube-api-access-s796v") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). 
InnerVolumeSpecName "kube-api-access-s796v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.131095 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory" (OuterVolumeSpecName: "inventory") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.142408 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.143853 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.146131 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.160455 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "80fafa83-0a64-48b0-9bd9-a5c59a344b8d" (UID: "80fafa83-0a64-48b0-9bd9-a5c59a344b8d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208519 4780 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208561 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208573 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s796v\" (UniqueName: \"kubernetes.io/projected/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-kube-api-access-s796v\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208583 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208595 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208607 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.208620 4780 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/80fafa83-0a64-48b0-9bd9-a5c59a344b8d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.486167 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" event={"ID":"80fafa83-0a64-48b0-9bd9-a5c59a344b8d","Type":"ContainerDied","Data":"c32d9fce72f56de73749a9ee34a334a56303a270658f3a15145c664201bfa8ac"} Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.486448 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32d9fce72f56de73749a9ee34a334a56303a270658f3a15145c664201bfa8ac" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.486368 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-88kbg" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.577571 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-mghjq"] Dec 05 09:02:39 crc kubenswrapper[4780]: E1205 09:02:39.578098 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="extract-utilities" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578123 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="extract-utilities" Dec 05 09:02:39 crc kubenswrapper[4780]: E1205 09:02:39.578137 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="extract-content" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578147 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="extract-content" Dec 05 09:02:39 crc kubenswrapper[4780]: E1205 09:02:39.578166 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="registry-server" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578174 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="registry-server" Dec 05 09:02:39 crc kubenswrapper[4780]: E1205 09:02:39.578204 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fafa83-0a64-48b0-9bd9-a5c59a344b8d" containerName="telemetry-openstack-openstack-cell1" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578212 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fafa83-0a64-48b0-9bd9-a5c59a344b8d" containerName="telemetry-openstack-openstack-cell1" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578472 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fafa83-0a64-48b0-9bd9-a5c59a344b8d" containerName="telemetry-openstack-openstack-cell1" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.578497 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0650072f-933b-402a-93fa-e2df4a081864" containerName="registry-server" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.579616 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.582747 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.582943 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.583182 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.583327 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.583488 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.593646 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-mghjq"] Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.718503 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.718691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xhm\" (UniqueName: \"kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.718730 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.719013 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.719120 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.821303 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.821380 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.821403 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.821542 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xhm\" (UniqueName: \"kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.821571 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.825441 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.825973 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.826794 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.829534 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.840002 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xhm\" (UniqueName: \"kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm\") pod \"neutron-sriov-openstack-openstack-cell1-mghjq\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:39 crc kubenswrapper[4780]: I1205 09:02:39.895935 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:02:40 crc kubenswrapper[4780]: I1205 09:02:40.587565 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-mghjq"] Dec 05 09:02:41 crc kubenswrapper[4780]: I1205 09:02:41.505780 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" event={"ID":"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b","Type":"ContainerStarted","Data":"1adc3142b33fc9d4d4cd617362f087004df729a1794742c2dae5346b161014a6"} Dec 05 09:02:41 crc kubenswrapper[4780]: I1205 09:02:41.506430 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" event={"ID":"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b","Type":"ContainerStarted","Data":"d3db0d57b1b530e21d48737fab272e10295890def7973fdacd3af2b61c81552a"} Dec 05 09:02:41 crc kubenswrapper[4780]: I1205 09:02:41.529288 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" podStartSLOduration=2.093241051 podStartE2EDuration="2.525876609s" podCreationTimestamp="2025-12-05 09:02:39 +0000 UTC" firstStartedPulling="2025-12-05 09:02:40.581952388 +0000 UTC m=+8194.651468720" lastFinishedPulling="2025-12-05 09:02:41.014587946 +0000 UTC m=+8195.084104278" observedRunningTime="2025-12-05 09:02:41.520608109 +0000 UTC m=+8195.590124471" watchObservedRunningTime="2025-12-05 09:02:41.525876609 +0000 UTC m=+8195.595392941" Dec 05 09:02:46 crc kubenswrapper[4780]: E1205 09:02:46.444787 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice\": RecentStats: unable to find data in memory cache]" Dec 05 09:02:52 crc kubenswrapper[4780]: I1205 09:02:52.139089 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:02:52 crc kubenswrapper[4780]: E1205 09:02:52.139851 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:02:56 crc kubenswrapper[4780]: E1205 09:02:56.763423 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice/crio-9b11d142e98d91508933dcfed92b280b8e12d8eddfce144765492f9883824975\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0650072f_933b_402a_93fa_e2df4a081864.slice\": RecentStats: unable to find data in memory cache]" Dec 05 09:03:03 crc kubenswrapper[4780]: I1205 09:03:03.138560 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:03:03 crc kubenswrapper[4780]: E1205 09:03:03.139369 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:03:15 crc kubenswrapper[4780]: I1205 09:03:15.139209 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:03:15 crc kubenswrapper[4780]: E1205 09:03:15.140059 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:03:28 crc kubenswrapper[4780]: I1205 09:03:28.138778 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:03:28 crc kubenswrapper[4780]: E1205 09:03:28.139678 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:03:43 crc kubenswrapper[4780]: I1205 09:03:43.138732 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:03:43 crc kubenswrapper[4780]: E1205 09:03:43.139636 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:03:56 crc kubenswrapper[4780]: I1205 09:03:56.147669 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:03:56 crc kubenswrapper[4780]: E1205 09:03:56.148912 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:04:11 crc kubenswrapper[4780]: I1205 09:04:11.138741 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:04:11 crc kubenswrapper[4780]: E1205 09:04:11.139453 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:04:25 crc kubenswrapper[4780]: I1205 09:04:25.138857 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:04:25 crc kubenswrapper[4780]: E1205 09:04:25.139738 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:04:37 crc kubenswrapper[4780]: I1205 09:04:37.139574 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:04:37 crc kubenswrapper[4780]: E1205 09:04:37.140382 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:04:52 crc kubenswrapper[4780]: I1205 09:04:52.138832 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:04:52 crc kubenswrapper[4780]: E1205 09:04:52.139642 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:05:07 crc kubenswrapper[4780]: I1205 09:05:07.139848 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:05:07 crc kubenswrapper[4780]: E1205 09:05:07.140568 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:05:22 crc kubenswrapper[4780]: I1205 09:05:22.139014 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:05:22 crc kubenswrapper[4780]: E1205 09:05:22.139796 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:05:35 crc kubenswrapper[4780]: I1205 09:05:35.140042 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:05:35 crc kubenswrapper[4780]: E1205 09:05:35.141475 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:05:50 crc kubenswrapper[4780]: I1205 09:05:50.139859 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:05:50 crc kubenswrapper[4780]: E1205 09:05:50.142310 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:04 crc kubenswrapper[4780]: I1205 09:06:04.140396 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:06:04 crc kubenswrapper[4780]: E1205 09:06:04.141926 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:18 crc kubenswrapper[4780]: I1205 09:06:18.139222 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:06:18 crc kubenswrapper[4780]: E1205 09:06:18.140060 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:30 crc kubenswrapper[4780]: I1205 09:06:30.139852 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:06:30 crc kubenswrapper[4780]: E1205 09:06:30.140784 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:41 crc kubenswrapper[4780]: I1205 09:06:41.138928 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:06:41 crc kubenswrapper[4780]: E1205 09:06:41.139719 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:52 crc kubenswrapper[4780]: I1205 09:06:52.138999 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:06:52 crc kubenswrapper[4780]: E1205 09:06:52.142603 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:06:53 crc kubenswrapper[4780]: I1205 09:06:53.976503 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:06:53 crc kubenswrapper[4780]: I1205 09:06:53.980254 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:53 crc kubenswrapper[4780]: I1205 09:06:53.993529 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.093346 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.093551 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.093683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjg8\" (UniqueName: \"kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.196369 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.196449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjg8\" (UniqueName: \"kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.196754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.197263 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.197290 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.215736 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nmjg8\" (UniqueName: \"kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8\") pod \"redhat-marketplace-47hn7\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.308962 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.791517 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:06:54 crc kubenswrapper[4780]: I1205 09:06:54.937200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerStarted","Data":"ab73f8b051800e31bb1a3098544d49e2a4831cf99274e83d4871c78a59e1599d"} Dec 05 09:06:56 crc kubenswrapper[4780]: I1205 09:06:56.957159 4780 generic.go:334] "Generic (PLEG): container finished" podID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerID="2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65" exitCode=0 Dec 05 09:06:56 crc kubenswrapper[4780]: I1205 09:06:56.957213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerDied","Data":"2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65"} Dec 05 09:06:56 crc kubenswrapper[4780]: I1205 09:06:56.958850 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:07:01 crc kubenswrapper[4780]: I1205 09:07:01.002723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerStarted","Data":"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47"} Dec 05 09:07:02 crc kubenswrapper[4780]: I1205 09:07:02.012419 4780 generic.go:334] "Generic (PLEG): container finished" podID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerID="f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47" exitCode=0 Dec 05 09:07:02 crc kubenswrapper[4780]: I1205 09:07:02.012471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerDied","Data":"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47"} Dec 05 09:07:03 crc kubenswrapper[4780]: I1205 09:07:03.035161 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerStarted","Data":"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017"} Dec 05 09:07:03 crc kubenswrapper[4780]: I1205 09:07:03.060495 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-47hn7" podStartSLOduration=4.606945738 podStartE2EDuration="10.060474376s" podCreationTimestamp="2025-12-05 09:06:53 +0000 UTC" firstStartedPulling="2025-12-05 09:06:56.958653959 +0000 UTC m=+8451.028170281" lastFinishedPulling="2025-12-05 09:07:02.412182587 +0000 UTC m=+8456.481698919" observedRunningTime="2025-12-05 09:07:03.056711134 +0000 UTC m=+8457.126227476" watchObservedRunningTime="2025-12-05 09:07:03.060474376 +0000 UTC 
m=+8457.129990708" Dec 05 09:07:04 crc kubenswrapper[4780]: I1205 09:07:04.309947 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:04 crc kubenswrapper[4780]: I1205 09:07:04.310284 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:04 crc kubenswrapper[4780]: I1205 09:07:04.354825 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:07 crc kubenswrapper[4780]: I1205 09:07:07.138804 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:07:08 crc kubenswrapper[4780]: I1205 09:07:08.086701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc"} Dec 05 09:07:14 crc kubenswrapper[4780]: I1205 09:07:14.356979 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:14 crc kubenswrapper[4780]: I1205 09:07:14.424176 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.142095 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-47hn7" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="registry-server" containerID="cri-o://5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017" gracePeriod=2 Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.575316 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.767277 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities\") pod \"120d8d6c-e624-47d8-b005-5eebcb002b3f\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.767343 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmjg8\" (UniqueName: \"kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8\") pod \"120d8d6c-e624-47d8-b005-5eebcb002b3f\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.767580 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content\") pod \"120d8d6c-e624-47d8-b005-5eebcb002b3f\" (UID: \"120d8d6c-e624-47d8-b005-5eebcb002b3f\") " Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.768838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities" (OuterVolumeSpecName: "utilities") pod "120d8d6c-e624-47d8-b005-5eebcb002b3f" (UID: "120d8d6c-e624-47d8-b005-5eebcb002b3f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.774388 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8" (OuterVolumeSpecName: "kube-api-access-nmjg8") pod "120d8d6c-e624-47d8-b005-5eebcb002b3f" (UID: "120d8d6c-e624-47d8-b005-5eebcb002b3f"). InnerVolumeSpecName "kube-api-access-nmjg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.789702 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "120d8d6c-e624-47d8-b005-5eebcb002b3f" (UID: "120d8d6c-e624-47d8-b005-5eebcb002b3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.869976 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.870261 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmjg8\" (UniqueName: \"kubernetes.io/projected/120d8d6c-e624-47d8-b005-5eebcb002b3f-kube-api-access-nmjg8\") on node \"crc\" DevicePath \"\"" Dec 05 09:07:15 crc kubenswrapper[4780]: I1205 09:07:15.870324 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/120d8d6c-e624-47d8-b005-5eebcb002b3f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.159225 4780 generic.go:334] "Generic (PLEG): container finished" podID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerID="5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017" exitCode=0 Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.159349 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47hn7" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.163075 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerDied","Data":"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017"} Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.163159 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47hn7" event={"ID":"120d8d6c-e624-47d8-b005-5eebcb002b3f","Type":"ContainerDied","Data":"ab73f8b051800e31bb1a3098544d49e2a4831cf99274e83d4871c78a59e1599d"} Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.163185 4780 scope.go:117] "RemoveContainer" containerID="5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.185259 4780 scope.go:117] "RemoveContainer" containerID="f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.207018 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.221026 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-47hn7"] Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.222334 4780 scope.go:117] "RemoveContainer" containerID="2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.267697 4780 scope.go:117] "RemoveContainer" containerID="5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017" Dec 05 09:07:16 crc kubenswrapper[4780]: E1205 09:07:16.268263 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017\": container with ID starting with 5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017 not found: ID does not exist" containerID="5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.268302 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017"} err="failed to get container status \"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017\": rpc error: code = NotFound desc = could not find container \"5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017\": container with ID starting with 5ce661b98f9c2262647b37dfe0ddc394f9b9679d78b6a7d867dc8250f25b1017 not found: ID does not exist" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.268329 4780 scope.go:117] "RemoveContainer" containerID="f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47" Dec 05 09:07:16 crc kubenswrapper[4780]: E1205 09:07:16.268895 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47\": container with ID starting with f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47 not found: ID does not exist" containerID="f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.268918 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47"} err="failed to get container status \"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47\": rpc error: code = NotFound desc = could not find container \"f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47\": container with ID starting with f688707b10d31e204effa600d0825b114a83e3dacfae229a35a11778d1edfb47 not found: ID does not exist" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.268935 4780 scope.go:117] "RemoveContainer" containerID="2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65" Dec 05 09:07:16 crc kubenswrapper[4780]: E1205 09:07:16.269343 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65\": container with ID starting with 2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65 not found: ID does not exist" containerID="2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65" Dec 05 09:07:16 crc kubenswrapper[4780]: I1205 09:07:16.269367 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65"} err="failed to get container status \"2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65\": rpc error: code = NotFound desc = could not find container \"2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65\": container with ID starting with 2e6bbd7c612c24f40c2b58d05deb3ed150415410a79ddd263aeec0ec38f3cd65 not found: ID does not exist" Dec 05 09:07:18 crc kubenswrapper[4780]: I1205 09:07:18.149514 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" path="/var/lib/kubelet/pods/120d8d6c-e624-47d8-b005-5eebcb002b3f/volumes" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.117994 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:07:57 crc kubenswrapper[4780]: E1205 09:07:57.119180 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="extract-content" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.119201 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="extract-content" Dec 05 09:07:57 crc kubenswrapper[4780]: E1205 09:07:57.119219 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="registry-server" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.119226 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="registry-server" Dec 05 09:07:57 crc kubenswrapper[4780]: E1205 09:07:57.119235 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="extract-utilities" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.119243 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" containerName="extract-utilities" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.119506 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="120d8d6c-e624-47d8-b005-5eebcb002b3f" 
containerName="registry-server" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.121702 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.131699 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.307688 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sz4c\" (UniqueName: \"kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.307865 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.308130 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.313754 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jqst"] Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.315764 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.332692 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jqst"] Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410223 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410406 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410456 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-utilities\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410492 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sz4c\" (UniqueName: \"kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410519 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-catalog-content\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410681 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410720 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47rs\" (UniqueName: \"kubernetes.io/projected/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-kube-api-access-t47rs\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.410745 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.431085 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8sz4c\" (UniqueName: \"kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c\") pod \"certified-operators-h8wfj\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.442843 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.516871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-utilities\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.517315 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-catalog-content\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.517397 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t47rs\" (UniqueName: \"kubernetes.io/projected/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-kube-api-access-t47rs\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.518552 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-utilities\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.518830 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-catalog-content\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.543067 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t47rs\" (UniqueName: \"kubernetes.io/projected/b276c8de-e39f-4b60-a6bc-d08e9085e7c4-kube-api-access-t47rs\") pod \"community-operators-6jqst\" (UID: \"b276c8de-e39f-4b60-a6bc-d08e9085e7c4\") " pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:57 crc kubenswrapper[4780]: I1205 09:07:57.637056 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.085555 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.099929 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jqst"] Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.550625 4780 generic.go:334] "Generic (PLEG): container finished" podID="b276c8de-e39f-4b60-a6bc-d08e9085e7c4" containerID="abacc5237f85026c3b0db954a7f1d521c45df4141c7aa9d6a6f2b66147a732a6" exitCode=0 Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.550697 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqst" event={"ID":"b276c8de-e39f-4b60-a6bc-d08e9085e7c4","Type":"ContainerDied","Data":"abacc5237f85026c3b0db954a7f1d521c45df4141c7aa9d6a6f2b66147a732a6"} Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.550989 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqst" event={"ID":"b276c8de-e39f-4b60-a6bc-d08e9085e7c4","Type":"ContainerStarted","Data":"9c792c939d36352f972b13a6f31d70f7c200e719e6e196a7ee9488a4e342c6b4"} Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.560262 4780 generic.go:334] "Generic (PLEG): container finished" podID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerID="0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa" exitCode=0 Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.560326 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerDied","Data":"0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa"} Dec 05 09:07:58 crc kubenswrapper[4780]: I1205 09:07:58.560369 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerStarted","Data":"2498dac4cc7b442136c9c703e897f4acb8987e7ff1668c147fefd295872c9e2e"} Dec 05 09:07:59 crc kubenswrapper[4780]: I1205 09:07:59.580041 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerStarted","Data":"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf"} Dec 05 09:08:00 crc kubenswrapper[4780]: I1205 09:08:00.592331 4780 generic.go:334] "Generic (PLEG): container finished" podID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerID="a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf" exitCode=0 Dec 05 09:08:00 crc kubenswrapper[4780]: I1205 09:08:00.592399 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerDied","Data":"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf"} Dec 05 09:08:04 crc kubenswrapper[4780]: I1205 09:08:04.640060 4780 generic.go:334] "Generic (PLEG): container finished" podID="b276c8de-e39f-4b60-a6bc-d08e9085e7c4" containerID="d8a9f04aa69cab6f46557353ebd2738b2daa3e2311c84e69ed9cef79b9ee3670" exitCode=0 Dec 05 09:08:04 crc kubenswrapper[4780]: I1205 09:08:04.642983 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6jqst" event={"ID":"b276c8de-e39f-4b60-a6bc-d08e9085e7c4","Type":"ContainerDied","Data":"d8a9f04aa69cab6f46557353ebd2738b2daa3e2311c84e69ed9cef79b9ee3670"} Dec 05 09:08:04 crc kubenswrapper[4780]: I1205 09:08:04.654624 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerStarted","Data":"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079"} Dec 05 09:08:04 crc kubenswrapper[4780]: I1205 09:08:04.706335 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h8wfj" podStartSLOduration=2.47348582 podStartE2EDuration="7.706317105s" podCreationTimestamp="2025-12-05 09:07:57 +0000 UTC" firstStartedPulling="2025-12-05 09:07:58.562916674 +0000 UTC m=+8512.632433006" lastFinishedPulling="2025-12-05 09:08:03.795747959 +0000 UTC m=+8517.865264291" observedRunningTime="2025-12-05 09:08:04.70392874 +0000 UTC m=+8518.773445072" watchObservedRunningTime="2025-12-05 09:08:04.706317105 +0000 UTC m=+8518.775833437" Dec 05 09:08:05 crc kubenswrapper[4780]: I1205 09:08:05.676683 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqst" event={"ID":"b276c8de-e39f-4b60-a6bc-d08e9085e7c4","Type":"ContainerStarted","Data":"16164ad04c5afd749b79708dbc49adda63d7ea8515313d404ae5ddc6bb1f7b94"} Dec 05 09:08:05 crc kubenswrapper[4780]: I1205 09:08:05.707673 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jqst" podStartSLOduration=2.171007636 podStartE2EDuration="8.707649793s" podCreationTimestamp="2025-12-05 09:07:57 +0000 UTC" firstStartedPulling="2025-12-05 09:07:58.552254286 +0000 UTC m=+8512.621770618" lastFinishedPulling="2025-12-05 09:08:05.088896443 +0000 UTC m=+8519.158412775" observedRunningTime="2025-12-05 09:08:05.696758089 +0000 UTC m=+8519.766274421" watchObservedRunningTime="2025-12-05 09:08:05.707649793 +0000 UTC m=+8519.777166115" Dec 05 09:08:07 crc kubenswrapper[4780]: I1205 09:08:07.443842 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:07 crc kubenswrapper[4780]: I1205 09:08:07.444774 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:07 crc kubenswrapper[4780]: I1205 09:08:07.487799 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:07 crc kubenswrapper[4780]: I1205 09:08:07.637517 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:08:07 crc kubenswrapper[4780]: I1205 09:08:07.637569 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:08:08 crc kubenswrapper[4780]: I1205 09:08:08.681194 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6jqst" podUID="b276c8de-e39f-4b60-a6bc-d08e9085e7c4" containerName="registry-server" probeResult="failure" output=< Dec 05 09:08:08 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Dec 05 09:08:08 crc kubenswrapper[4780]: > Dec 05 09:08:17 crc kubenswrapper[4780]: I1205 09:08:17.497249 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:17 crc kubenswrapper[4780]: I1205 09:08:17.553381 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:08:17 crc kubenswrapper[4780]: I1205 09:08:17.688378 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:08:17 crc kubenswrapper[4780]: I1205 09:08:17.741859 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jqst" Dec 05 09:08:17 crc kubenswrapper[4780]: I1205 09:08:17.811725 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h8wfj" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="registry-server" containerID="cri-o://92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079" gracePeriod=2 Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.306936 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.398921 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content\") pod \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.399295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sz4c\" (UniqueName: \"kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c\") pod \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.399509 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities\") pod \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\" (UID: \"12cc8e8a-35f7-4d25-89d3-75418ffc2713\") " Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.400225 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities" (OuterVolumeSpecName: "utilities") pod "12cc8e8a-35f7-4d25-89d3-75418ffc2713" (UID: "12cc8e8a-35f7-4d25-89d3-75418ffc2713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.405612 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c" (OuterVolumeSpecName: "kube-api-access-8sz4c") pod "12cc8e8a-35f7-4d25-89d3-75418ffc2713" (UID: "12cc8e8a-35f7-4d25-89d3-75418ffc2713"). InnerVolumeSpecName "kube-api-access-8sz4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.445564 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12cc8e8a-35f7-4d25-89d3-75418ffc2713" (UID: "12cc8e8a-35f7-4d25-89d3-75418ffc2713"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.501900 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.502567 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12cc8e8a-35f7-4d25-89d3-75418ffc2713-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.502622 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sz4c\" (UniqueName: \"kubernetes.io/projected/12cc8e8a-35f7-4d25-89d3-75418ffc2713-kube-api-access-8sz4c\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.824955 4780 generic.go:334] "Generic (PLEG): container finished" podID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerID="92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079" exitCode=0 Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.825007 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h8wfj" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.825011 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerDied","Data":"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079"} Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.825042 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h8wfj" event={"ID":"12cc8e8a-35f7-4d25-89d3-75418ffc2713","Type":"ContainerDied","Data":"2498dac4cc7b442136c9c703e897f4acb8987e7ff1668c147fefd295872c9e2e"} Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.825063 4780 scope.go:117] "RemoveContainer" containerID="92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.855489 4780 scope.go:117] "RemoveContainer" containerID="a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.891011 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.895767 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h8wfj"] Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.907846 4780 scope.go:117] "RemoveContainer" containerID="0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.956609 4780 scope.go:117] "RemoveContainer" containerID="92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079" Dec 05 09:08:18 crc kubenswrapper[4780]: E1205 09:08:18.957525 4780 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079\": container with ID starting with 92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079 not found: ID does not exist" containerID="92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.957570 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079"} err="failed to get container status \"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079\": rpc error: code = NotFound desc = could not find container \"92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079\": container with ID starting with 92b8abb957defc15a08edfef84ea3dd641763ebb2b99801214462fbc1d927079 not found: ID does not exist" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.957597 4780 scope.go:117] "RemoveContainer" containerID="a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf" Dec 05 09:08:18 crc kubenswrapper[4780]: E1205 09:08:18.958187 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf\": container with ID starting with a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf not found: ID does not exist" containerID="a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.958233 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf"} err="failed to get container status \"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf\": rpc error: code = NotFound desc = could not find container \"a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf\": container with ID starting with a453e9d6b8851bb4384bdfa4de963476ef377a5e9b4e24ef6162afefa92ed4bf not found: ID does not exist" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.958260 4780 scope.go:117] "RemoveContainer" containerID="0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa" Dec 05 09:08:18 crc kubenswrapper[4780]: E1205 09:08:18.958570 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa\": container with ID starting with 0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa not found: ID does not exist" containerID="0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa" Dec 05 09:08:18 crc kubenswrapper[4780]: I1205 09:08:18.958607 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa"} err="failed to get container status \"0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa\": rpc error: code = NotFound desc = could not find container \"0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa\": container with ID starting with 0d876b7989f5286c6f6b10b1937590867db0272ccac222fced9e005045294caa not found: ID does not exist" Dec 05 09:08:19 crc kubenswrapper[4780]: I1205 09:08:19.552459 4780 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jqst"] Dec 05 09:08:19 crc kubenswrapper[4780]: I1205 09:08:19.937467 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 09:08:19 crc kubenswrapper[4780]: I1205 09:08:19.937758 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lngjb" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="registry-server" containerID="cri-o://98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0" gracePeriod=2 Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.153738 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" path="/var/lib/kubelet/pods/12cc8e8a-35f7-4d25-89d3-75418ffc2713/volumes" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.503258 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lngjb" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.653264 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content\") pod \"184cf114-854b-4bb9-9ff3-35aa1715027a\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.653649 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities\") pod \"184cf114-854b-4bb9-9ff3-35aa1715027a\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.653694 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggzzf\" (UniqueName: \"kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf\") pod \"184cf114-854b-4bb9-9ff3-35aa1715027a\" (UID: \"184cf114-854b-4bb9-9ff3-35aa1715027a\") " Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.654416 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities" (OuterVolumeSpecName: "utilities") pod "184cf114-854b-4bb9-9ff3-35aa1715027a" (UID: "184cf114-854b-4bb9-9ff3-35aa1715027a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.692402 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf" (OuterVolumeSpecName: "kube-api-access-ggzzf") pod "184cf114-854b-4bb9-9ff3-35aa1715027a" (UID: "184cf114-854b-4bb9-9ff3-35aa1715027a"). InnerVolumeSpecName "kube-api-access-ggzzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.718057 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "184cf114-854b-4bb9-9ff3-35aa1715027a" (UID: "184cf114-854b-4bb9-9ff3-35aa1715027a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.756778 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.756837 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggzzf\" (UniqueName: \"kubernetes.io/projected/184cf114-854b-4bb9-9ff3-35aa1715027a-kube-api-access-ggzzf\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.756851 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184cf114-854b-4bb9-9ff3-35aa1715027a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.846092 4780 generic.go:334] "Generic (PLEG): container finished" podID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerID="98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0" exitCode=0 Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.846140 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerDied","Data":"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0"} Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.846168 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lngjb" event={"ID":"184cf114-854b-4bb9-9ff3-35aa1715027a","Type":"ContainerDied","Data":"e3afebb57172d1081bbd07418410c7fda868b99429f9f7eeb276af09bd1263f2"} Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.846185 4780 scope.go:117] "RemoveContainer" containerID="98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.846185 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lngjb" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.872730 4780 scope.go:117] "RemoveContainer" containerID="1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.885044 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.895387 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lngjb"] Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.932958 4780 scope.go:117] "RemoveContainer" containerID="28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.982486 4780 scope.go:117] "RemoveContainer" containerID="98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0" Dec 05 09:08:20 crc kubenswrapper[4780]: E1205 09:08:20.985460 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0\": container with ID starting with 98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0 not found: ID does not exist" containerID="98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.985651 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0"} err="failed to get container status \"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0\": rpc error: code = NotFound desc = could not find container \"98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0\": container with ID starting with 98b6656ad52d4a68c05ba5c35157d32def22dfa132b47540ea8a17de958f62e0 not found: ID does not exist" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.985862 4780 scope.go:117] "RemoveContainer" containerID="1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042" Dec 05 09:08:20 crc kubenswrapper[4780]: E1205 09:08:20.990007 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042\": container with ID starting with 1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042 not found: ID does not exist" containerID="1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.990048 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042"} err="failed to get container status \"1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042\": rpc error: code = NotFound desc = could not find container \"1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042\": container with ID starting with 1ef131684856c5eeb38ac70ff6dba90fe9b95b2208d1b8a12a6c53f88c1b8042 not found: ID does not exist" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.990077 4780 scope.go:117] "RemoveContainer" containerID="28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944" Dec 05 09:08:20 crc kubenswrapper[4780]: E1205 09:08:20.992027 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944\": container with ID starting with 28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944 not found: ID does not exist" containerID="28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944" Dec 05 09:08:20 crc kubenswrapper[4780]: I1205 09:08:20.992063 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944"} err="failed to get container status \"28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944\": rpc error: code = NotFound desc = could not find container \"28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944\": container with ID starting with 28e998b977994b02e757f485eff5e1d61214787a61362552795baf19466d2944 not found: ID does not exist" Dec 05 09:08:22 crc kubenswrapper[4780]: I1205 09:08:22.154157 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" path="/var/lib/kubelet/pods/184cf114-854b-4bb9-9ff3-35aa1715027a/volumes" Dec 05 09:09:27 crc kubenswrapper[4780]: I1205 09:09:27.158092 4780 generic.go:334] "Generic (PLEG): container finished" podID="63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" containerID="1adc3142b33fc9d4d4cd617362f087004df729a1794742c2dae5346b161014a6" exitCode=0 Dec 05 09:09:27 crc kubenswrapper[4780]: I1205 09:09:27.158174 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" event={"ID":"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b","Type":"ContainerDied","Data":"1adc3142b33fc9d4d4cd617362f087004df729a1794742c2dae5346b161014a6"} Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.561172 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.631439 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory\") pod \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.631569 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key\") pod \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.631630 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle\") pod \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.631700 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65xhm\" (UniqueName: \"kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm\") pod \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.631744 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0\") pod \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\" (UID: \"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b\") " Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.637943 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm" (OuterVolumeSpecName: "kube-api-access-65xhm") pod "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" (UID: "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b"). InnerVolumeSpecName "kube-api-access-65xhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.638048 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" (UID: "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.660590 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" (UID: "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.663314 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory" (OuterVolumeSpecName: "inventory") pod "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" (UID: "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.669719 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" (UID: "63f6d5db-07fa-40e2-9efa-b29f78bbfe3b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.734873 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.734928 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.734938 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.734950 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65xhm\" (UniqueName: \"kubernetes.io/projected/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-kube-api-access-65xhm\") on node \"crc\" DevicePath \"\"" Dec 05 09:09:28 crc kubenswrapper[4780]: I1205 09:09:28.734960 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/63f6d5db-07fa-40e2-9efa-b29f78bbfe3b-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.174410 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" event={"ID":"63f6d5db-07fa-40e2-9efa-b29f78bbfe3b","Type":"ContainerDied","Data":"d3db0d57b1b530e21d48737fab272e10295890def7973fdacd3af2b61c81552a"} Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.174453 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3db0d57b1b530e21d48737fab272e10295890def7973fdacd3af2b61c81552a" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.174477 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-mghjq" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.263282 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf"] Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.263821 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="extract-utilities" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.263845 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="extract-utilities" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.263868 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="registry-server" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.263960 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="registry-server" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.263984 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="extract-content" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.263993 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="extract-content" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.264013 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="extract-content" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264021 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="extract-content" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.264045 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="extract-utilities" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264053 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="extract-utilities" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.264079 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" containerName="neutron-sriov-openstack-openstack-cell1" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264089 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" containerName="neutron-sriov-openstack-openstack-cell1" Dec 05 09:09:29 crc kubenswrapper[4780]: E1205 09:09:29.264105 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="registry-server" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264115 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="registry-server" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264356 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f6d5db-07fa-40e2-9efa-b29f78bbfe3b" containerName="neutron-sriov-openstack-openstack-cell1" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.264374 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="184cf114-854b-4bb9-9ff3-35aa1715027a" containerName="registry-server" Dec 05 09:09:29 crc 
kubenswrapper[4780]: I1205 09:09:29.264413 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="12cc8e8a-35f7-4d25-89d3-75418ffc2713" containerName="registry-server" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.265362 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.268570 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.268570 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.268743 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.271693 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.271820 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.281168 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf"] Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.345605 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.345740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.345846 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.345983 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzwf\" (UniqueName: \"kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.346047 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0\") pod 
\"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.448133 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.448524 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.448562 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmzwf\" (UniqueName: \"kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.448589 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.448635 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.454198 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.455675 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.458112 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.459256 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.471518 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmzwf\" (UniqueName: \"kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf\") pod \"neutron-dhcp-openstack-openstack-cell1-r2mdf\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.591013 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.907818 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:09:29 crc kubenswrapper[4780]: I1205 09:09:29.908232 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:09:30 crc kubenswrapper[4780]: I1205 09:09:30.177754 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf"] Dec 05 09:09:30 crc kubenswrapper[4780]: W1205 09:09:30.180779 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ea9a8f2_904c_45b8_9e1a_72a0780a003a.slice/crio-3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381 WatchSource:0}: Error finding container 3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381: Status 404 returned error can't find the container with id 3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381 Dec 05 09:09:31 crc kubenswrapper[4780]: I1205 09:09:31.199476 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" event={"ID":"4ea9a8f2-904c-45b8-9e1a-72a0780a003a","Type":"ContainerStarted","Data":"0c7863fba653dea07cc256fc4ef1dd470299c61651f294678577ae6c3a83d0b5"} Dec 05 09:09:31 crc kubenswrapper[4780]: I1205 09:09:31.200108 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" event={"ID":"4ea9a8f2-904c-45b8-9e1a-72a0780a003a","Type":"ContainerStarted","Data":"3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381"} Dec 05 09:09:31 crc kubenswrapper[4780]: I1205 09:09:31.224294 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" podStartSLOduration=1.7863896910000001 podStartE2EDuration="2.224271803s" 
podCreationTimestamp="2025-12-05 09:09:29 +0000 UTC" firstStartedPulling="2025-12-05 09:09:30.183224432 +0000 UTC m=+8604.252740764" lastFinishedPulling="2025-12-05 09:09:30.621106544 +0000 UTC m=+8604.690622876" observedRunningTime="2025-12-05 09:09:31.216336039 +0000 UTC m=+8605.285852361" watchObservedRunningTime="2025-12-05 09:09:31.224271803 +0000 UTC m=+8605.293788135" Dec 05 09:09:59 crc kubenswrapper[4780]: I1205 09:09:59.907711 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:09:59 crc kubenswrapper[4780]: I1205 09:09:59.908202 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:10:29 crc kubenswrapper[4780]: I1205 09:10:29.908116 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:10:29 crc kubenswrapper[4780]: I1205 09:10:29.908664 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:10:29 crc kubenswrapper[4780]: I1205 09:10:29.908727 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:10:29 crc kubenswrapper[4780]: I1205 09:10:29.909529 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:10:29 crc kubenswrapper[4780]: I1205 09:10:29.909582 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc" gracePeriod=600 Dec 05 09:10:30 crc kubenswrapper[4780]: I1205 09:10:30.758596 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc" exitCode=0 Dec 05 09:10:30 crc kubenswrapper[4780]: I1205 09:10:30.758680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc"} Dec 05 09:10:30 crc kubenswrapper[4780]: I1205 
09:10:30.758995 4780 scope.go:117] "RemoveContainer" containerID="fe3133fbb70acdc66faceab4582b9e1efaa648a4ad9e22b6daa338601ee6e7f1" Dec 05 09:10:31 crc kubenswrapper[4780]: I1205 09:10:31.771330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847"} Dec 05 09:12:47 crc kubenswrapper[4780]: I1205 09:12:47.906566 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:12:47 crc kubenswrapper[4780]: I1205 09:12:47.909210 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:47 crc kubenswrapper[4780]: I1205 09:12:47.968360 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.069720 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjw5\" (UniqueName: \"kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.069797 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.070230 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.172658 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjw5\" (UniqueName: \"kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.172761 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.173065 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.173630 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.173643 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.192271 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjw5\" (UniqueName: \"kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5\") pod \"redhat-operators-p6bgr\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.234949 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:48 crc kubenswrapper[4780]: I1205 09:12:48.738586 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:12:49 crc kubenswrapper[4780]: I1205 09:12:49.012221 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerID="9c2f84782d0f4753e50ca2e337e01de481ee575e4da74ddb2e07608d2d6ad684" exitCode=0 Dec 05 09:12:49 crc kubenswrapper[4780]: I1205 09:12:49.012351 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerDied","Data":"9c2f84782d0f4753e50ca2e337e01de481ee575e4da74ddb2e07608d2d6ad684"} Dec 05 09:12:49 crc kubenswrapper[4780]: I1205 09:12:49.012570 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerStarted","Data":"3e2e358e7e02b0fe3cc248827593a3f14f777609c91994868933b3397f1fc5e8"} Dec 05 09:12:49 crc kubenswrapper[4780]: I1205 09:12:49.015613 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:12:51 crc kubenswrapper[4780]: I1205 09:12:51.031709 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerStarted","Data":"57bfdeaa072953d4cc947eb2ce6d9a3094cd8d1c3bbedbb153d36c3abd705bc3"} Dec 05 09:12:54 crc kubenswrapper[4780]: I1205 09:12:54.065135 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerID="57bfdeaa072953d4cc947eb2ce6d9a3094cd8d1c3bbedbb153d36c3abd705bc3" exitCode=0 Dec 05 09:12:54 crc kubenswrapper[4780]: I1205 09:12:54.065212 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerDied","Data":"57bfdeaa072953d4cc947eb2ce6d9a3094cd8d1c3bbedbb153d36c3abd705bc3"} Dec 05 09:12:56 crc kubenswrapper[4780]: I1205 09:12:56.083788 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" 
event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerStarted","Data":"7c68aa60f6b2533bde626d08711f18e45bca3321993b404476462d2071b3e605"} Dec 05 09:12:56 crc kubenswrapper[4780]: I1205 09:12:56.101498 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6bgr" podStartSLOduration=2.861612562 podStartE2EDuration="9.101481239s" podCreationTimestamp="2025-12-05 09:12:47 +0000 UTC" firstStartedPulling="2025-12-05 09:12:49.015389904 +0000 UTC m=+8803.084906236" lastFinishedPulling="2025-12-05 09:12:55.255258581 +0000 UTC m=+8809.324774913" observedRunningTime="2025-12-05 09:12:56.099680021 +0000 UTC m=+8810.169196353" watchObservedRunningTime="2025-12-05 09:12:56.101481239 +0000 UTC m=+8810.170997571" Dec 05 09:12:58 crc kubenswrapper[4780]: I1205 09:12:58.235648 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:58 crc kubenswrapper[4780]: I1205 09:12:58.237104 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:12:59 crc kubenswrapper[4780]: I1205 09:12:59.286306 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p6bgr" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="registry-server" probeResult="failure" output=< Dec 05 09:12:59 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Dec 05 09:12:59 crc kubenswrapper[4780]: > Dec 05 09:12:59 crc kubenswrapper[4780]: I1205 09:12:59.907818 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:12:59 crc kubenswrapper[4780]: I1205 09:12:59.907902 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:13:08 crc kubenswrapper[4780]: I1205 09:13:08.284200 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:13:08 crc kubenswrapper[4780]: I1205 09:13:08.332069 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:13:08 crc kubenswrapper[4780]: I1205 09:13:08.521103 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:13:10 crc kubenswrapper[4780]: I1205 09:13:10.204687 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p6bgr" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="registry-server" containerID="cri-o://7c68aa60f6b2533bde626d08711f18e45bca3321993b404476462d2071b3e605" gracePeriod=2 Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.217137 4780 generic.go:334] "Generic (PLEG): container finished" podID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerID="7c68aa60f6b2533bde626d08711f18e45bca3321993b404476462d2071b3e605" exitCode=0 Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.217174 
4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerDied","Data":"7c68aa60f6b2533bde626d08711f18e45bca3321993b404476462d2071b3e605"} Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.217740 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6bgr" event={"ID":"0a6ca32d-5f00-4b00-ba6d-78962d34d092","Type":"ContainerDied","Data":"3e2e358e7e02b0fe3cc248827593a3f14f777609c91994868933b3397f1fc5e8"} Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.217764 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2e358e7e02b0fe3cc248827593a3f14f777609c91994868933b3397f1fc5e8" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.227815 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.253066 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxjw5\" (UniqueName: \"kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5\") pod \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.253310 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities\") pod \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.253379 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content\") pod \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\" (UID: \"0a6ca32d-5f00-4b00-ba6d-78962d34d092\") " Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.254202 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities" (OuterVolumeSpecName: "utilities") pod "0a6ca32d-5f00-4b00-ba6d-78962d34d092" (UID: "0a6ca32d-5f00-4b00-ba6d-78962d34d092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.254311 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.261246 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5" (OuterVolumeSpecName: "kube-api-access-rxjw5") pod "0a6ca32d-5f00-4b00-ba6d-78962d34d092" (UID: "0a6ca32d-5f00-4b00-ba6d-78962d34d092"). InnerVolumeSpecName "kube-api-access-rxjw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.356630 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxjw5\" (UniqueName: \"kubernetes.io/projected/0a6ca32d-5f00-4b00-ba6d-78962d34d092-kube-api-access-rxjw5\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.361056 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a6ca32d-5f00-4b00-ba6d-78962d34d092" (UID: "0a6ca32d-5f00-4b00-ba6d-78962d34d092"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:13:11 crc kubenswrapper[4780]: I1205 09:13:11.458123 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a6ca32d-5f00-4b00-ba6d-78962d34d092-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:12 crc kubenswrapper[4780]: I1205 09:13:12.225465 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6bgr" Dec 05 09:13:12 crc kubenswrapper[4780]: I1205 09:13:12.249686 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:13:12 crc kubenswrapper[4780]: I1205 09:13:12.260793 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p6bgr"] Dec 05 09:13:14 crc kubenswrapper[4780]: I1205 09:13:14.162244 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" path="/var/lib/kubelet/pods/0a6ca32d-5f00-4b00-ba6d-78962d34d092/volumes" Dec 05 09:13:29 crc kubenswrapper[4780]: I1205 09:13:29.908329 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:13:29 crc kubenswrapper[4780]: I1205 09:13:29.908983 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:13:55 crc kubenswrapper[4780]: I1205 09:13:55.609122 4780 generic.go:334] "Generic (PLEG): container finished" podID="4ea9a8f2-904c-45b8-9e1a-72a0780a003a" containerID="0c7863fba653dea07cc256fc4ef1dd470299c61651f294678577ae6c3a83d0b5" exitCode=0 Dec 05 09:13:55 crc kubenswrapper[4780]: I1205 09:13:55.609207 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" event={"ID":"4ea9a8f2-904c-45b8-9e1a-72a0780a003a","Type":"ContainerDied","Data":"0c7863fba653dea07cc256fc4ef1dd470299c61651f294678577ae6c3a83d0b5"} Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.048833 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.087581 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmzwf\" (UniqueName: \"kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf\") pod \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.087644 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle\") pod \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.087680 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0\") pod \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.087959 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key\") pod \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.088160 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory\") pod \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\" (UID: \"4ea9a8f2-904c-45b8-9e1a-72a0780a003a\") " Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.095751 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "4ea9a8f2-904c-45b8-9e1a-72a0780a003a" (UID: "4ea9a8f2-904c-45b8-9e1a-72a0780a003a"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.098365 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf" (OuterVolumeSpecName: "kube-api-access-lmzwf") pod "4ea9a8f2-904c-45b8-9e1a-72a0780a003a" (UID: "4ea9a8f2-904c-45b8-9e1a-72a0780a003a"). InnerVolumeSpecName "kube-api-access-lmzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.120055 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory" (OuterVolumeSpecName: "inventory") pod "4ea9a8f2-904c-45b8-9e1a-72a0780a003a" (UID: "4ea9a8f2-904c-45b8-9e1a-72a0780a003a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.122988 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ea9a8f2-904c-45b8-9e1a-72a0780a003a" (UID: "4ea9a8f2-904c-45b8-9e1a-72a0780a003a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.124832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "4ea9a8f2-904c-45b8-9e1a-72a0780a003a" (UID: "4ea9a8f2-904c-45b8-9e1a-72a0780a003a"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.190922 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.191000 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmzwf\" (UniqueName: \"kubernetes.io/projected/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-kube-api-access-lmzwf\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.191015 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.191028 4780 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.191042 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ea9a8f2-904c-45b8-9e1a-72a0780a003a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.628283 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" event={"ID":"4ea9a8f2-904c-45b8-9e1a-72a0780a003a","Type":"ContainerDied","Data":"3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381"} Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.628332 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b7232ca16a38321e5246d5449bfaa97b9995e811decc55a4061e9edf87a4381" Dec 05 09:13:57 crc kubenswrapper[4780]: I1205 09:13:57.628334 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-r2mdf" Dec 05 09:13:59 crc kubenswrapper[4780]: I1205 09:13:59.908281 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:13:59 crc kubenswrapper[4780]: I1205 09:13:59.908670 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:13:59 crc kubenswrapper[4780]: I1205 09:13:59.908725 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:13:59 crc kubenswrapper[4780]: I1205 09:13:59.909623 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:13:59 crc kubenswrapper[4780]: I1205 09:13:59.909991 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" gracePeriod=600 Dec 05 09:14:00 crc kubenswrapper[4780]: E1205 09:14:00.598988 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:14:00 crc kubenswrapper[4780]: I1205 09:14:00.658207 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" exitCode=0 Dec 05 09:14:00 crc kubenswrapper[4780]: I1205 09:14:00.658274 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847"} Dec 05 09:14:00 crc kubenswrapper[4780]: I1205 09:14:00.658502 4780 scope.go:117] "RemoveContainer" containerID="5f91664f6ca8c45a9b3a807e5d3fe4efbd331835e0125c31ee473500936408cc" Dec 05 09:14:00 crc kubenswrapper[4780]: I1205 09:14:00.658963 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:14:00 crc kubenswrapper[4780]: E1205 09:14:00.659338 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:14:14 crc kubenswrapper[4780]: I1205 09:14:14.138588 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:14:14 crc kubenswrapper[4780]: E1205 09:14:14.139386 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.775223 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.775837 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" gracePeriod=30 Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.824515 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.824999 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" gracePeriod=30 Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.434701 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.435005 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" containerID="cri-o://3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57" gracePeriod=30 Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.435140 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" containerID="cri-o://855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad" gracePeriod=30 Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.506943 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.507177 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler" containerID="cri-o://3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" gracePeriod=30 Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.531780 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.532103 4780 
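The repeated "back-off 5m0s" errors above are the kubelet's crash-loop backoff: each failed restart doubles the wait before the next attempt, capped at five minutes, and the pod reports CrashLoopBackOff until the timer expires. A sketch of that schedule in Go, assuming the kubelet's usual 10s initial delay together with the 5m cap visible in the log (treat the exact constants as an assumption, not a spec):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 10 * time.Second // assumed kubelet starting backoff
            maxDelay     = 5 * time.Minute  // matches the "back-off 5m0s" cap in the log
        )
        delay := initialDelay
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %s\n", restart, delay)
            delay *= 2 // each crash doubles the wait...
            if delay > maxDelay {
                delay = maxDelay // ...until it pins at 5m0s (CrashLoopBackOff)
            }
        }
    }

After a handful of crashes the delay pins at the cap, which is why the same 5m0s message recurs at 09:14:00, 09:14:14, and again at 09:14:29 below.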
Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.775223 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.775837 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" gracePeriod=30
Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.824515 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 05 09:14:24 crc kubenswrapper[4780]: I1205 09:14:24.824999 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.434701 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.435005 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" containerID="cri-o://3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.435140 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" containerID="cri-o://855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.506943 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.507177 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler" containerID="cri-o://3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.531780 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.532103 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" containerID="cri-o://2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.532176 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" containerID="cri-o://607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120" gracePeriod=30
Dec 05 09:14:25 crc kubenswrapper[4780]: E1205 09:14:25.587471 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:25 crc kubenswrapper[4780]: E1205 09:14:25.590050 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:25 crc kubenswrapper[4780]: E1205 09:14:25.594294 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:25 crc kubenswrapper[4780]: E1205 09:14:25.594382 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor"
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.909277 4780 generic.go:334] "Generic (PLEG): container finished" podID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerID="2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332" exitCode=143
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.909349 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerDied","Data":"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332"}
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.911724 4780 generic.go:334] "Generic (PLEG): container finished" podID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerID="3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57" exitCode=143
Dec 05 09:14:25 crc kubenswrapper[4780]: I1205 09:14:25.911759 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerDied","Data":"3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57"}
Dec 05 09:14:26 crc kubenswrapper[4780]: E1205 09:14:26.197735 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:26 crc kubenswrapper[4780]: E1205 09:14:26.199459 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:26 crc kubenswrapper[4780]: E1205 09:14:26.200755 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Dec 05 09:14:26 crc kubenswrapper[4780]: E1205 09:14:26.200816 4780 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor"
Dec 05 09:14:27 crc kubenswrapper[4780]: E1205 09:14:27.567383 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 is running failed: container process not found" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 09:14:27 crc kubenswrapper[4780]: E1205 09:14:27.568089 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 is running failed: container process not found" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 09:14:27 crc kubenswrapper[4780]: E1205 09:14:27.568514 4780 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 is running failed: container process not found" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 09:14:27 crc kubenswrapper[4780]: E1205 09:14:27.568545 4780 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler"
Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.839742 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.930454 4780 generic.go:334] "Generic (PLEG): container finished" podID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" exitCode=0 Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.930497 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.930511 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99fc83fe-5001-446e-aeca-106c7a5fd5ed","Type":"ContainerDied","Data":"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490"} Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.930549 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99fc83fe-5001-446e-aeca-106c7a5fd5ed","Type":"ContainerDied","Data":"6e7e732a7f6086d6a6b6f192b30c79780eb8b09caa5c74f772ba7da5965eb4d4"} Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.930573 4780 scope.go:117] "RemoveContainer" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.952099 4780 scope.go:117] "RemoveContainer" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" Dec 05 09:14:27 crc kubenswrapper[4780]: E1205 09:14:27.952558 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490\": container with ID starting with 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 not found: ID does not exist" containerID="3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490" Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.952599 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490"} err="failed to get container status \"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490\": rpc error: code = NotFound desc = could not find container \"3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490\": container with ID starting with 3292445c12115f79036015854f0f83ca20f3c609834339855d0af49a66213490 not found: ID does not exist" Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.997584 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle\") pod \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.997702 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data\") pod \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\" (UID: \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " Dec 05 09:14:27 crc kubenswrapper[4780]: I1205 09:14:27.997950 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhctf\" (UniqueName: \"kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf\") pod \"99fc83fe-5001-446e-aeca-106c7a5fd5ed\" (UID: 
\"99fc83fe-5001-446e-aeca-106c7a5fd5ed\") " Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.465852 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf" (OuterVolumeSpecName: "kube-api-access-rhctf") pod "99fc83fe-5001-446e-aeca-106c7a5fd5ed" (UID: "99fc83fe-5001-446e-aeca-106c7a5fd5ed"). InnerVolumeSpecName "kube-api-access-rhctf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.490407 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data" (OuterVolumeSpecName: "config-data") pod "99fc83fe-5001-446e-aeca-106c7a5fd5ed" (UID: "99fc83fe-5001-446e-aeca-106c7a5fd5ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.492756 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99fc83fe-5001-446e-aeca-106c7a5fd5ed" (UID: "99fc83fe-5001-446e-aeca-106c7a5fd5ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.508428 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhctf\" (UniqueName: \"kubernetes.io/projected/99fc83fe-5001-446e-aeca-106c7a5fd5ed-kube-api-access-rhctf\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.508477 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.508487 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99fc83fe-5001-446e-aeca-106c7a5fd5ed-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.840957 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.855350 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.863806 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:28 crc kubenswrapper[4780]: E1205 09:14:28.864321 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="registry-server" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864342 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="registry-server" Dec 05 09:14:28 crc kubenswrapper[4780]: E1205 09:14:28.864372 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea9a8f2-904c-45b8-9e1a-72a0780a003a" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864380 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea9a8f2-904c-45b8-9e1a-72a0780a003a" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 05 09:14:28 crc kubenswrapper[4780]: E1205 09:14:28.864406 
4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="extract-utilities" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864412 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="extract-utilities" Dec 05 09:14:28 crc kubenswrapper[4780]: E1205 09:14:28.864423 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="extract-content" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864429 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="extract-content" Dec 05 09:14:28 crc kubenswrapper[4780]: E1205 09:14:28.864436 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864442 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864682 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea9a8f2-904c-45b8-9e1a-72a0780a003a" containerName="neutron-dhcp-openstack-openstack-cell1" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864695 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" containerName="nova-scheduler-scheduler" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.864710 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6ca32d-5f00-4b00-ba6d-78962d34d092" containerName="registry-server" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.865584 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.867718 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.873482 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.936978 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.89:8775/\": read tcp 10.217.0.2:39336->10.217.1.89:8775: read: connection reset by peer" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.937037 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.89:8775/\": read tcp 10.217.0.2:39338->10.217.1.89:8775: read: connection reset by peer" Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.951979 4780 generic.go:334] "Generic (PLEG): container finished" podID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerID="855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad" exitCode=0 Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.952070 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerDied","Data":"855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad"} Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.957564 4780 generic.go:334] "Generic (PLEG): container finished" podID="506f828e-216f-456c-91c9-ee53f5b4056e" containerID="baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" exitCode=0 Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.957603 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"506f828e-216f-456c-91c9-ee53f5b4056e","Type":"ContainerDied","Data":"baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d"} Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.957625 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"506f828e-216f-456c-91c9-ee53f5b4056e","Type":"ContainerDied","Data":"32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880"} Dec 05 09:14:28 crc kubenswrapper[4780]: I1205 09:14:28.957639 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a758bb539b0d315f33050c46e5b6ce19a76811dee99741f6bae18e2f47b880" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.016919 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-config-data\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.016956 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wg67\" (UniqueName: \"kubernetes.io/projected/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-kube-api-access-7wg67\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 
09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.017142 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.120293 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.120687 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-config-data\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.120713 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wg67\" (UniqueName: \"kubernetes.io/projected/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-kube-api-access-7wg67\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.130160 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-config-data\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.132314 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.139459 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:14:29 crc kubenswrapper[4780]: E1205 09:14:29.140114 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.143495 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wg67\" (UniqueName: \"kubernetes.io/projected/1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0-kube-api-access-7wg67\") pod \"nova-scheduler-0\" (UID: \"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0\") " pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.223523 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.250327 4780 util.go:30] "No sandbox for pod can be found. 
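The Verify/Mount entries above show the replacement nova-scheduler-0 pod receiving two secret volumes and a projected service-account token volume before its sandbox is created. A sketch of those volume definitions as client-go types, assuming k8s.io/api/core/v1: the secret name nova-scheduler-config-data appears in the reflector entry above, while combined-ca-bundle as a secret name and the token projection shape are assumptions reconstructed from the volume names and plugin kinds in the log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{
            {
                Name: "config-data", // kubernetes.io/secret plugin in the log
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "nova-scheduler-config-data"},
                },
            },
            {
                Name: "combined-ca-bundle", // secret name assumed to match the volume name
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"},
                },
            },
            {
                // kube-api-access-* volumes are projected service-account tokens
                // (kubernetes.io/projected plugin in the log).
                Name: "kube-api-access-7wg67",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{
                            {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
                        },
                    },
                },
            },
        }
        for _, v := range volumes {
            fmt.Println("volume:", v.Name)
        }
    }
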
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.255285 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.289016 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.325940 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data\") pod \"506f828e-216f-456c-91c9-ee53f5b4056e\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.326089 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle\") pod \"506f828e-216f-456c-91c9-ee53f5b4056e\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.326260 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh88p\" (UniqueName: \"kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p\") pod \"506f828e-216f-456c-91c9-ee53f5b4056e\" (UID: \"506f828e-216f-456c-91c9-ee53f5b4056e\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.335515 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p" (OuterVolumeSpecName: "kube-api-access-vh88p") pod "506f828e-216f-456c-91c9-ee53f5b4056e" (UID: "506f828e-216f-456c-91c9-ee53f5b4056e"). InnerVolumeSpecName "kube-api-access-vh88p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.356482 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data" (OuterVolumeSpecName: "config-data") pod "506f828e-216f-456c-91c9-ee53f5b4056e" (UID: "506f828e-216f-456c-91c9-ee53f5b4056e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.365047 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506f828e-216f-456c-91c9-ee53f5b4056e" (UID: "506f828e-216f-456c-91c9-ee53f5b4056e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430246 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430299 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430329 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvn9\" (UniqueName: \"kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430358 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs\") pod \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430382 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430463 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data\") pod \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430514 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs\") pod \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430601 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle\") pod \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430692 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8st7\" (UniqueName: \"kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7\") pod \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\" (UID: \"75b7f6c3-470e-43ba-98c7-d474cd9f65b5\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430767 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: 
\"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.430816 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle\") pod \"6021f8a7-e206-46aa-8faf-3383d3594e72\" (UID: \"6021f8a7-e206-46aa-8faf-3383d3594e72\") " Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.431341 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.431356 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506f828e-216f-456c-91c9-ee53f5b4056e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.431367 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh88p\" (UniqueName: \"kubernetes.io/projected/506f828e-216f-456c-91c9-ee53f5b4056e-kube-api-access-vh88p\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.435047 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs" (OuterVolumeSpecName: "logs") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.436445 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs" (OuterVolumeSpecName: "logs") pod "75b7f6c3-470e-43ba-98c7-d474cd9f65b5" (UID: "75b7f6c3-470e-43ba-98c7-d474cd9f65b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.458071 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7" (OuterVolumeSpecName: "kube-api-access-b8st7") pod "75b7f6c3-470e-43ba-98c7-d474cd9f65b5" (UID: "75b7f6c3-470e-43ba-98c7-d474cd9f65b5"). InnerVolumeSpecName "kube-api-access-b8st7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.463278 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9" (OuterVolumeSpecName: "kube-api-access-ddvn9") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "kube-api-access-ddvn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.533432 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8st7\" (UniqueName: \"kubernetes.io/projected/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-kube-api-access-b8st7\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.533461 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6021f8a7-e206-46aa-8faf-3383d3594e72-logs\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.533472 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvn9\" (UniqueName: \"kubernetes.io/projected/6021f8a7-e206-46aa-8faf-3383d3594e72-kube-api-access-ddvn9\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.533480 4780 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-logs\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.591054 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b7f6c3-470e-43ba-98c7-d474cd9f65b5" (UID: "75b7f6c3-470e-43ba-98c7-d474cd9f65b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.632364 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data" (OuterVolumeSpecName: "config-data") pod "75b7f6c3-470e-43ba-98c7-d474cd9f65b5" (UID: "75b7f6c3-470e-43ba-98c7-d474cd9f65b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.633557 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.634992 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.635019 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.635029 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.646153 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data" (OuterVolumeSpecName: "config-data") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.737534 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.765262 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.766024 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "75b7f6c3-470e-43ba-98c7-d474cd9f65b5" (UID: "75b7f6c3-470e-43ba-98c7-d474cd9f65b5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.772587 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6021f8a7-e206-46aa-8faf-3383d3594e72" (UID: "6021f8a7-e206-46aa-8faf-3383d3594e72"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.845468 4780 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.846113 4780 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6021f8a7-e206-46aa-8faf-3383d3594e72-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.846128 4780 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b7f6c3-470e-43ba-98c7-d474cd9f65b5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.968984 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6021f8a7-e206-46aa-8faf-3383d3594e72","Type":"ContainerDied","Data":"e6a4321bf3c053841065adfbea756c3077b6543843c82dd3ba04a04a0b01521b"} Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.969032 4780 scope.go:117] "RemoveContainer" containerID="855c8dc83e0f8ffa8ce1d99240f11d41a23fae233257c693de1ce81bec912dad" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.969280 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.976293 4780 generic.go:334] "Generic (PLEG): container finished" podID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerID="607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120" exitCode=0 Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.976352 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.976386 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerDied","Data":"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120"} Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.976410 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"75b7f6c3-470e-43ba-98c7-d474cd9f65b5","Type":"ContainerDied","Data":"661cc7582eed76348caa22e705e5379900b5207aa595f2f892383135d59c0360"} Dec 05 09:14:29 crc kubenswrapper[4780]: I1205 09:14:29.976373 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.024161 4780 scope.go:117] "RemoveContainer" containerID="3fc22ff04d7899de939f4613a4322bfc38236dc65241afca65beaad0c6d22f57" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.065751 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.088646 4780 scope.go:117] "RemoveContainer" containerID="607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.098364 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.117448 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.128915 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.139222 4780 scope.go:117] "RemoveContainer" containerID="2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.151392 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" path="/var/lib/kubelet/pods/506f828e-216f-456c-91c9-ee53f5b4056e/volumes" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.152966 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" path="/var/lib/kubelet/pods/6021f8a7-e206-46aa-8faf-3383d3594e72/volumes" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.154647 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99fc83fe-5001-446e-aeca-106c7a5fd5ed" path="/var/lib/kubelet/pods/99fc83fe-5001-446e-aeca-106c7a5fd5ed/volumes" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155263 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.155660 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155679 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.155694 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155700 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.155719 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155725 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.155738 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor" Dec 05 09:14:30 crc 
kubenswrapper[4780]: I1205 09:14:30.155744 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.155765 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155771 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.155996 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-log" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.156022 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-metadata" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.156032 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="6021f8a7-e206-46aa-8faf-3383d3594e72" containerName="nova-api-api" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.156040 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="506f828e-216f-456c-91c9-ee53f5b4056e" containerName="nova-cell1-conductor-conductor" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.156050 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" containerName="nova-metadata-log" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.156836 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.157349 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.159918 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.161308 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.164290 4780 scope.go:117] "RemoveContainer" containerID="607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.165735 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120\": container with ID starting with 607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120 not found: ID does not exist" containerID="607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.165772 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120"} err="failed to get container status \"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120\": rpc error: code = NotFound desc = could not find container \"607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120\": container with ID starting with 607bfdc6a1b57fc46dcc6361cd03bf2dd3acfda9494811bfbc39edb132203120 not found: ID does not exist" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.165824 4780 scope.go:117] "RemoveContainer" containerID="2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332" Dec 05 09:14:30 crc kubenswrapper[4780]: E1205 09:14:30.166209 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332\": container with ID starting with 2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332 not found: ID does not exist" containerID="2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.166243 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332"} err="failed to get container status \"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332\": rpc error: code = NotFound desc = could not find container \"2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332\": container with ID starting with 2ed0dbd3610621a06fa5b4f569a145bd40ff215a6d645d76b11626621241f332 not found: ID does not exist" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.169504 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.196380 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.203632 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.209817 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.209994 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.210422 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.220138 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.222824 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.226471 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.227151 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.236162 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.248433 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.255537 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.255607 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkl9\" (UniqueName: \"kubernetes.io/projected/c170a703-f3e7-409c-8102-9ad7915e513c-kube-api-access-wlkl9\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.255649 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.273239 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357436 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4rdt\" (UniqueName: \"kubernetes.io/projected/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-kube-api-access-z4rdt\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357509 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" 
(UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357550 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvpr\" (UniqueName: \"kubernetes.io/projected/313f56bf-7b78-4af8-bdce-b386cba8dfcb-kube-api-access-4qvpr\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357591 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkl9\" (UniqueName: \"kubernetes.io/projected/c170a703-f3e7-409c-8102-9ad7915e513c-kube-api-access-wlkl9\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357613 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357641 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357670 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313f56bf-7b78-4af8-bdce-b386cba8dfcb-logs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357711 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357732 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357765 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-config-data\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.357813 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-public-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.358268 4780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.358631 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-config-data\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.358736 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-logs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461353 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4rdt\" (UniqueName: \"kubernetes.io/projected/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-kube-api-access-z4rdt\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qvpr\" (UniqueName: \"kubernetes.io/projected/313f56bf-7b78-4af8-bdce-b386cba8dfcb-kube-api-access-4qvpr\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461523 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461555 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313f56bf-7b78-4af8-bdce-b386cba8dfcb-logs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461587 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461631 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-config-data\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " 
pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461693 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-public-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461731 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461835 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-config-data\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.461858 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-logs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.462386 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-logs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.462390 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313f56bf-7b78-4af8-bdce-b386cba8dfcb-logs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.666406 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.666617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c170a703-f3e7-409c-8102-9ad7915e513c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668101 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668107 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " 
pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668108 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkl9\" (UniqueName: \"kubernetes.io/projected/c170a703-f3e7-409c-8102-9ad7915e513c-kube-api-access-wlkl9\") pod \"nova-cell1-conductor-0\" (UID: \"c170a703-f3e7-409c-8102-9ad7915e513c\") " pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668276 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668398 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-config-data\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668404 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-config-data\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.668617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.669660 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/313f56bf-7b78-4af8-bdce-b386cba8dfcb-public-tls-certs\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.670337 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qvpr\" (UniqueName: \"kubernetes.io/projected/313f56bf-7b78-4af8-bdce-b386cba8dfcb-kube-api-access-4qvpr\") pod \"nova-api-0\" (UID: \"313f56bf-7b78-4af8-bdce-b386cba8dfcb\") " pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.671152 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4rdt\" (UniqueName: \"kubernetes.io/projected/286873d6-e3e1-4a17-b5b1-1697e5bcc61e-kube-api-access-z4rdt\") pod \"nova-metadata-0\" (UID: \"286873d6-e3e1-4a17-b5b1-1697e5bcc61e\") " pod="openstack/nova-metadata-0" Dec 05 09:14:30 crc kubenswrapper[4780]: W1205 09:14:30.677918 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b7813ed_7cb9_4c31_a3fd_edce76fdb1e0.slice/crio-651d50d2567d6da0abbfb03531de873587733c067cd39b1fb6a33b72c7bddb6d WatchSource:0}: Error finding container 651d50d2567d6da0abbfb03531de873587733c067cd39b1fb6a33b72c7bddb6d: Status 404 returned error can't find the container with id 651d50d2567d6da0abbfb03531de873587733c067cd39b1fb6a33b72c7bddb6d Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.791514 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.828108 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 09:14:30 crc kubenswrapper[4780]: I1205 09:14:30.845818 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.013325 4780 generic.go:334] "Generic (PLEG): container finished" podID="45254a92-70be-48cc-950a-683efef559d5" containerID="5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" exitCode=0 Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.013401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45254a92-70be-48cc-950a-683efef559d5","Type":"ContainerDied","Data":"5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9"} Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.015448 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0","Type":"ContainerStarted","Data":"651d50d2567d6da0abbfb03531de873587733c067cd39b1fb6a33b72c7bddb6d"} Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.191834 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.283321 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxjps\" (UniqueName: \"kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps\") pod \"45254a92-70be-48cc-950a-683efef559d5\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.283519 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle\") pod \"45254a92-70be-48cc-950a-683efef559d5\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.283848 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data\") pod \"45254a92-70be-48cc-950a-683efef559d5\" (UID: \"45254a92-70be-48cc-950a-683efef559d5\") " Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.371086 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps" (OuterVolumeSpecName: "kube-api-access-xxjps") pod "45254a92-70be-48cc-950a-683efef559d5" (UID: "45254a92-70be-48cc-950a-683efef559d5"). InnerVolumeSpecName "kube-api-access-xxjps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.378498 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data" (OuterVolumeSpecName: "config-data") pod "45254a92-70be-48cc-950a-683efef559d5" (UID: "45254a92-70be-48cc-950a-683efef559d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.387187 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45254a92-70be-48cc-950a-683efef559d5" (UID: "45254a92-70be-48cc-950a-683efef559d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.390382 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.390441 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxjps\" (UniqueName: \"kubernetes.io/projected/45254a92-70be-48cc-950a-683efef559d5-kube-api-access-xxjps\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.390465 4780 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45254a92-70be-48cc-950a-683efef559d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.441453 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.603390 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 09:14:31 crc kubenswrapper[4780]: I1205 09:14:31.706601 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 09:14:31 crc kubenswrapper[4780]: W1205 09:14:31.712009 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313f56bf_7b78_4af8_bdce_b386cba8dfcb.slice/crio-833b9e9f85f96c1ff3787d26a2810f948e07abc2c6c2f378ab37f9991a35eea4 WatchSource:0}: Error finding container 833b9e9f85f96c1ff3787d26a2810f948e07abc2c6c2f378ab37f9991a35eea4: Status 404 returned error can't find the container with id 833b9e9f85f96c1ff3787d26a2810f948e07abc2c6c2f378ab37f9991a35eea4 Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.041266 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"286873d6-e3e1-4a17-b5b1-1697e5bcc61e","Type":"ContainerStarted","Data":"72310b64820824ee6dd76769fb95bab3a5aab3d8bb9d969858dabdb12d9dc4ed"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.041311 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"286873d6-e3e1-4a17-b5b1-1697e5bcc61e","Type":"ContainerStarted","Data":"04e5b593318ae06259486720ac0c0f769022491838854676433f6abbe44fdfa9"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.044228 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313f56bf-7b78-4af8-bdce-b386cba8dfcb","Type":"ContainerStarted","Data":"3a41c36b1163f2767103257423950f6cd6ce778ec29c82fc9bfa733e28d1a347"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.044299 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313f56bf-7b78-4af8-bdce-b386cba8dfcb","Type":"ContainerStarted","Data":"833b9e9f85f96c1ff3787d26a2810f948e07abc2c6c2f378ab37f9991a35eea4"} Dec 05 09:14:32 crc kubenswrapper[4780]: 
I1205 09:14:32.047006 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"45254a92-70be-48cc-950a-683efef559d5","Type":"ContainerDied","Data":"ba5484d669738ec533fd76f47335ca594c025072a64bdc17312a162cfc16293e"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.047012 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.047049 4780 scope.go:117] "RemoveContainer" containerID="5910ccedb52d5ea30c8819939f35f81059ca754293d49206e2afa17af16fa3f9" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.053815 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c170a703-f3e7-409c-8102-9ad7915e513c","Type":"ContainerStarted","Data":"223f89eb567c2d58240130b86a4779596bd10ff10e8a8925857133a170d02e92"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.053850 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c170a703-f3e7-409c-8102-9ad7915e513c","Type":"ContainerStarted","Data":"4519d540b07463ccd2e62bd6f8711a8f9bca32a91eda9f380eb50bf580381770"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.054164 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.055378 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0","Type":"ContainerStarted","Data":"26e8c743604b6425ab6f19f7ac2012c02baf4784b64d01353370fda0cf6f3f15"} Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.077141 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.077116698 podStartE2EDuration="2.077116698s" podCreationTimestamp="2025-12-05 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:14:32.07274451 +0000 UTC m=+8906.142260852" watchObservedRunningTime="2025-12-05 09:14:32.077116698 +0000 UTC m=+8906.146633030" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.102005 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.10198238 podStartE2EDuration="4.10198238s" podCreationTimestamp="2025-12-05 09:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:14:32.09827998 +0000 UTC m=+8906.167796332" watchObservedRunningTime="2025-12-05 09:14:32.10198238 +0000 UTC m=+8906.171498722" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.122438 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.157514 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b7f6c3-470e-43ba-98c7-d474cd9f65b5" path="/var/lib/kubelet/pods/75b7f6c3-470e-43ba-98c7-d474cd9f65b5/volumes" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.158990 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.161599 4780 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:32 crc kubenswrapper[4780]: E1205 09:14:32.162205 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.162278 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.162557 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="45254a92-70be-48cc-950a-683efef559d5" containerName="nova-cell0-conductor-conductor" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.163379 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.169802 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.170948 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.309558 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.309683 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtm9b\" (UniqueName: \"kubernetes.io/projected/8864df53-14ca-40d5-9200-18c54f92600f-kube-api-access-gtm9b\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.309912 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.411515 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.411566 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.411630 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtm9b\" (UniqueName: \"kubernetes.io/projected/8864df53-14ca-40d5-9200-18c54f92600f-kube-api-access-gtm9b\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 
crc kubenswrapper[4780]: I1205 09:14:32.422918 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.424421 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8864df53-14ca-40d5-9200-18c54f92600f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.433448 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtm9b\" (UniqueName: \"kubernetes.io/projected/8864df53-14ca-40d5-9200-18c54f92600f-kube-api-access-gtm9b\") pod \"nova-cell0-conductor-0\" (UID: \"8864df53-14ca-40d5-9200-18c54f92600f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.490867 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:32 crc kubenswrapper[4780]: I1205 09:14:32.945491 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 09:14:33 crc kubenswrapper[4780]: I1205 09:14:33.073723 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313f56bf-7b78-4af8-bdce-b386cba8dfcb","Type":"ContainerStarted","Data":"573409d96cdde2fe74deb154f90199fd5a49accefac8ccfd34382636e3dad9fc"} Dec 05 09:14:33 crc kubenswrapper[4780]: I1205 09:14:33.078244 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8864df53-14ca-40d5-9200-18c54f92600f","Type":"ContainerStarted","Data":"125f85e10f9ef749462009935ce5df4251d1d245ffeebfbd3f2f95fcec7048af"} Dec 05 09:14:33 crc kubenswrapper[4780]: I1205 09:14:33.080287 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"286873d6-e3e1-4a17-b5b1-1697e5bcc61e","Type":"ContainerStarted","Data":"61a13f370e22aed3a96b640ef1f63de5ff8c2fad0493c3816cc4ab849014186f"} Dec 05 09:14:33 crc kubenswrapper[4780]: I1205 09:14:33.095603 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.09558431 podStartE2EDuration="3.09558431s" podCreationTimestamp="2025-12-05 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:14:33.094705396 +0000 UTC m=+8907.164221728" watchObservedRunningTime="2025-12-05 09:14:33.09558431 +0000 UTC m=+8907.165100642" Dec 05 09:14:33 crc kubenswrapper[4780]: I1205 09:14:33.123739 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.12372131 podStartE2EDuration="3.12372131s" podCreationTimestamp="2025-12-05 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:14:33.117329238 +0000 UTC m=+8907.186845570" watchObservedRunningTime="2025-12-05 09:14:33.12372131 +0000 UTC m=+8907.193237642" Dec 05 09:14:34 crc kubenswrapper[4780]: I1205 09:14:34.095454 4780 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8864df53-14ca-40d5-9200-18c54f92600f","Type":"ContainerStarted","Data":"1aff30e61ca5ef326653e9ec533a5ae143627ba9dea7a4bfc3a04bc4cb9ee005"} Dec 05 09:14:34 crc kubenswrapper[4780]: I1205 09:14:34.096160 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:34 crc kubenswrapper[4780]: I1205 09:14:34.122211 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.122193531 podStartE2EDuration="2.122193531s" podCreationTimestamp="2025-12-05 09:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:14:34.111967275 +0000 UTC m=+8908.181483607" watchObservedRunningTime="2025-12-05 09:14:34.122193531 +0000 UTC m=+8908.191709863" Dec 05 09:14:34 crc kubenswrapper[4780]: I1205 09:14:34.153083 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45254a92-70be-48cc-950a-683efef559d5" path="/var/lib/kubelet/pods/45254a92-70be-48cc-950a-683efef559d5/volumes" Dec 05 09:14:34 crc kubenswrapper[4780]: I1205 09:14:34.251292 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 09:14:35 crc kubenswrapper[4780]: I1205 09:14:35.783338 4780 scope.go:117] "RemoveContainer" containerID="baa23f6c618b5c56cb965da46d4cc43f3f8b682b8c857f697bed4822ddae3c1d" Dec 05 09:14:35 crc kubenswrapper[4780]: I1205 09:14:35.846947 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 09:14:35 crc kubenswrapper[4780]: I1205 09:14:35.847030 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 09:14:39 crc kubenswrapper[4780]: I1205 09:14:39.251305 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 09:14:39 crc kubenswrapper[4780]: I1205 09:14:39.280575 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.817272 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.829179 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.829236 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.846824 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.847144 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 09:14:40 crc kubenswrapper[4780]: I1205 09:14:40.985006 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 09:14:41 crc kubenswrapper[4780]: I1205 09:14:41.846165 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="313f56bf-7b78-4af8-bdce-b386cba8dfcb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.178:8774/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Dec 05 09:14:41 crc kubenswrapper[4780]: I1205 09:14:41.847160 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="313f56bf-7b78-4af8-bdce-b386cba8dfcb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.178:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 09:14:41 crc kubenswrapper[4780]: I1205 09:14:41.859208 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="286873d6-e3e1-4a17-b5b1-1697e5bcc61e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 09:14:41 crc kubenswrapper[4780]: I1205 09:14:41.859219 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="286873d6-e3e1-4a17-b5b1-1697e5bcc61e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 09:14:42 crc kubenswrapper[4780]: I1205 09:14:42.522208 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 09:14:44 crc kubenswrapper[4780]: I1205 09:14:44.140010 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:14:44 crc kubenswrapper[4780]: E1205 09:14:44.140964 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.834803 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.835676 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.838907 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.840392 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.855530 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.857842 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 09:14:50 crc kubenswrapper[4780]: I1205 09:14:50.861496 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 09:14:51 crc kubenswrapper[4780]: I1205 09:14:51.248617 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 09:14:51 crc kubenswrapper[4780]: I1205 09:14:51.253297 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 09:14:51 crc kubenswrapper[4780]: I1205 09:14:51.254969 4780 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.354145 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4"] Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.355989 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.358663 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.362617 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.362628 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.362686 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.362708 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.363067 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-g59dl" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.363075 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.372999 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4"] Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527628 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527682 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527774 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527808 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527869 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.527948 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.528117 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.528187 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.528225 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42qz\" (UniqueName: \"kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630654 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630719 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0\") pod 
\"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630764 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c42qz\" (UniqueName: \"kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630809 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630840 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630871 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630926 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630953 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.630978 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.632069 4780 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.636910 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.637443 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.637468 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.637494 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.638103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.638567 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.639469 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: 
\"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.661578 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c42qz\" (UniqueName: \"kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:52 crc kubenswrapper[4780]: I1205 09:14:52.675633 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:14:53 crc kubenswrapper[4780]: I1205 09:14:53.197277 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4"] Dec 05 09:14:53 crc kubenswrapper[4780]: W1205 09:14:53.198362 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38ee0d1_13fd_48a8_8e0c_e7864d35d3fb.slice/crio-ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4 WatchSource:0}: Error finding container ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4: Status 404 returned error can't find the container with id ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4 Dec 05 09:14:53 crc kubenswrapper[4780]: I1205 09:14:53.267015 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" event={"ID":"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb","Type":"ContainerStarted","Data":"ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4"} Dec 05 09:14:54 crc kubenswrapper[4780]: I1205 09:14:54.279331 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" event={"ID":"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb","Type":"ContainerStarted","Data":"333a10372d5e4d9cf8aa7781f701ef6aac26a9db3e1dee2761cb9015acd619e3"} Dec 05 09:14:54 crc kubenswrapper[4780]: I1205 09:14:54.303430 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" podStartSLOduration=1.596639411 podStartE2EDuration="2.30340969s" podCreationTimestamp="2025-12-05 09:14:52 +0000 UTC" firstStartedPulling="2025-12-05 09:14:53.201482683 +0000 UTC m=+8927.270999005" lastFinishedPulling="2025-12-05 09:14:53.908252952 +0000 UTC m=+8927.977769284" observedRunningTime="2025-12-05 09:14:54.295943439 +0000 UTC m=+8928.365459781" watchObservedRunningTime="2025-12-05 09:14:54.30340969 +0000 UTC m=+8928.372926022" Dec 05 09:14:57 crc kubenswrapper[4780]: I1205 09:14:57.139302 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:14:57 crc kubenswrapper[4780]: E1205 09:14:57.139862 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.149859 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph"] Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.151581 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.154927 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.155816 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.156047 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph"] Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.281695 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.281934 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nd8\" (UniqueName: \"kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.281973 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.384536 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6nd8\" (UniqueName: \"kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.384935 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.385074 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: 
\"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.386120 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.479151 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6nd8\" (UniqueName: \"kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.479233 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume\") pod \"collect-profiles-29415435-dwwph\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:00 crc kubenswrapper[4780]: I1205 09:15:00.772585 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:01 crc kubenswrapper[4780]: I1205 09:15:01.188585 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph"] Dec 05 09:15:01 crc kubenswrapper[4780]: I1205 09:15:01.350901 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" event={"ID":"28302b2b-3cb6-4dbe-a41c-aaacc32727d9","Type":"ContainerStarted","Data":"5ba2c4bf599eb66b6884861598190a92d313a97d787d6da1bdb69e3718c43f7d"} Dec 05 09:15:02 crc kubenswrapper[4780]: I1205 09:15:02.361799 4780 generic.go:334] "Generic (PLEG): container finished" podID="28302b2b-3cb6-4dbe-a41c-aaacc32727d9" containerID="c0b876341f6ba18fbe31b1878349ad84520fe28ec69781849851b440ae3640d0" exitCode=0 Dec 05 09:15:02 crc kubenswrapper[4780]: I1205 09:15:02.361854 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" event={"ID":"28302b2b-3cb6-4dbe-a41c-aaacc32727d9","Type":"ContainerDied","Data":"c0b876341f6ba18fbe31b1878349ad84520fe28ec69781849851b440ae3640d0"} Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.705673 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.861351 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume\") pod \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.861393 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume\") pod \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.861551 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6nd8\" (UniqueName: \"kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8\") pod \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\" (UID: \"28302b2b-3cb6-4dbe-a41c-aaacc32727d9\") " Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.862370 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume" (OuterVolumeSpecName: "config-volume") pod "28302b2b-3cb6-4dbe-a41c-aaacc32727d9" (UID: "28302b2b-3cb6-4dbe-a41c-aaacc32727d9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.867284 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28302b2b-3cb6-4dbe-a41c-aaacc32727d9" (UID: "28302b2b-3cb6-4dbe-a41c-aaacc32727d9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.867616 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8" (OuterVolumeSpecName: "kube-api-access-x6nd8") pod "28302b2b-3cb6-4dbe-a41c-aaacc32727d9" (UID: "28302b2b-3cb6-4dbe-a41c-aaacc32727d9"). InnerVolumeSpecName "kube-api-access-x6nd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.964085 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.964123 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:15:03 crc kubenswrapper[4780]: I1205 09:15:03.964135 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6nd8\" (UniqueName: \"kubernetes.io/projected/28302b2b-3cb6-4dbe-a41c-aaacc32727d9-kube-api-access-x6nd8\") on node \"crc\" DevicePath \"\"" Dec 05 09:15:04 crc kubenswrapper[4780]: I1205 09:15:04.383921 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" event={"ID":"28302b2b-3cb6-4dbe-a41c-aaacc32727d9","Type":"ContainerDied","Data":"5ba2c4bf599eb66b6884861598190a92d313a97d787d6da1bdb69e3718c43f7d"} Dec 05 09:15:04 crc kubenswrapper[4780]: I1205 09:15:04.383967 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba2c4bf599eb66b6884861598190a92d313a97d787d6da1bdb69e3718c43f7d" Dec 05 09:15:04 crc kubenswrapper[4780]: I1205 09:15:04.384040 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415435-dwwph" Dec 05 09:15:04 crc kubenswrapper[4780]: I1205 09:15:04.824965 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x"] Dec 05 09:15:04 crc kubenswrapper[4780]: I1205 09:15:04.841886 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415390-lzh8x"] Dec 05 09:15:06 crc kubenswrapper[4780]: I1205 09:15:06.151305 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efffc02-4ec6-4e9d-ad06-49d23b3acaa9" path="/var/lib/kubelet/pods/7efffc02-4ec6-4e9d-ad06-49d23b3acaa9/volumes" Dec 05 09:15:08 crc kubenswrapper[4780]: I1205 09:15:08.139740 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:15:08 crc kubenswrapper[4780]: E1205 09:15:08.140310 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:15:21 crc kubenswrapper[4780]: I1205 09:15:21.139001 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:15:21 crc kubenswrapper[4780]: E1205 09:15:21.139870 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:15:33 crc kubenswrapper[4780]: I1205 09:15:33.138836 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:15:33 crc kubenswrapper[4780]: E1205 09:15:33.139503 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:15:35 crc kubenswrapper[4780]: I1205 09:15:35.917062 4780 scope.go:117] "RemoveContainer" containerID="0a326c13ebb6083aa7a208590530b6a0b0c6d214dfafd34d3bf43a4dab46dcb5" Dec 05 09:15:47 crc kubenswrapper[4780]: I1205 09:15:47.139536 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:15:47 crc kubenswrapper[4780]: E1205 09:15:47.141214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:15:59 crc kubenswrapper[4780]: I1205 09:15:59.139168 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:15:59 crc kubenswrapper[4780]: E1205 09:15:59.140156 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:16:13 crc kubenswrapper[4780]: I1205 09:16:13.139321 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:16:13 crc kubenswrapper[4780]: E1205 09:16:13.140164 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:16:25 crc kubenswrapper[4780]: I1205 09:16:25.139862 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:16:25 crc kubenswrapper[4780]: E1205 09:16:25.141230 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:16:36 crc kubenswrapper[4780]: I1205 09:16:36.146276 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:16:36 crc kubenswrapper[4780]: E1205 09:16:36.147243 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:16:47 crc kubenswrapper[4780]: I1205 09:16:47.139991 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:16:47 crc kubenswrapper[4780]: E1205 09:16:47.140763 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:00 crc kubenswrapper[4780]: I1205 09:17:00.139620 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:17:00 crc kubenswrapper[4780]: E1205 09:17:00.140754 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:14 crc kubenswrapper[4780]: I1205 09:17:14.138737 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:17:14 crc kubenswrapper[4780]: E1205 09:17:14.140231 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:27 crc kubenswrapper[4780]: I1205 09:17:27.139666 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:17:27 crc kubenswrapper[4780]: E1205 09:17:27.140664 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:38 crc kubenswrapper[4780]: I1205 09:17:38.139231 4780 
scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:17:38 crc kubenswrapper[4780]: E1205 09:17:38.140034 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:49 crc kubenswrapper[4780]: I1205 09:17:49.139550 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:17:49 crc kubenswrapper[4780]: E1205 09:17:49.141509 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.482086 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:17:55 crc kubenswrapper[4780]: E1205 09:17:55.483099 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28302b2b-3cb6-4dbe-a41c-aaacc32727d9" containerName="collect-profiles" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.483114 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="28302b2b-3cb6-4dbe-a41c-aaacc32727d9" containerName="collect-profiles" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.483331 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="28302b2b-3cb6-4dbe-a41c-aaacc32727d9" containerName="collect-profiles" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.484983 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.498326 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.583197 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.583466 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4mv\" (UniqueName: \"kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.583779 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.686120 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.686271 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.686338 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4mv\" (UniqueName: \"kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.686709 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.686810 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.705527 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7w4mv\" (UniqueName: \"kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv\") pod \"redhat-marketplace-4jr7x\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:55 crc kubenswrapper[4780]: I1205 09:17:55.813737 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:17:56 crc kubenswrapper[4780]: I1205 09:17:56.287912 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:17:56 crc kubenswrapper[4780]: I1205 09:17:56.993583 4780 generic.go:334] "Generic (PLEG): container finished" podID="39312413-6f0c-47d2-bb90-72a837e912fa" containerID="888a76d54eacbba61ce407488d5b7936042f6bb88e8d1f400ba02c1d19959d19" exitCode=0 Dec 05 09:17:56 crc kubenswrapper[4780]: I1205 09:17:56.993639 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerDied","Data":"888a76d54eacbba61ce407488d5b7936042f6bb88e8d1f400ba02c1d19959d19"} Dec 05 09:17:56 crc kubenswrapper[4780]: I1205 09:17:56.993932 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerStarted","Data":"3b38ad4c8ab2cebbbf555570cd7d5d5ef1f440bc7b38b3d4f751788e79246dce"} Dec 05 09:17:56 crc kubenswrapper[4780]: I1205 09:17:56.995815 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:17:58 crc kubenswrapper[4780]: I1205 09:17:58.006085 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerStarted","Data":"b7946cc55b86514279d3d1ae332fa032c635db6276c01a403ce71016bd037363"} Dec 05 09:17:59 crc kubenswrapper[4780]: I1205 09:17:59.016338 4780 generic.go:334] "Generic (PLEG): container finished" podID="39312413-6f0c-47d2-bb90-72a837e912fa" containerID="b7946cc55b86514279d3d1ae332fa032c635db6276c01a403ce71016bd037363" exitCode=0 Dec 05 09:17:59 crc kubenswrapper[4780]: I1205 09:17:59.016387 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerDied","Data":"b7946cc55b86514279d3d1ae332fa032c635db6276c01a403ce71016bd037363"} Dec 05 09:18:00 crc kubenswrapper[4780]: I1205 09:18:00.027558 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerStarted","Data":"032cf33b25fb603b0bac33ac466b916af4abae6fffed0018324bf4eafa432a6b"} Dec 05 09:18:00 crc kubenswrapper[4780]: I1205 09:18:00.048419 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4jr7x" podStartSLOduration=2.419873467 podStartE2EDuration="5.048394146s" podCreationTimestamp="2025-12-05 09:17:55 +0000 UTC" firstStartedPulling="2025-12-05 09:17:56.995590512 +0000 UTC m=+9111.065106844" lastFinishedPulling="2025-12-05 09:17:59.624111191 +0000 UTC m=+9113.693627523" observedRunningTime="2025-12-05 09:18:00.046092375 +0000 UTC m=+9114.115608717" watchObservedRunningTime="2025-12-05 09:18:00.048394146 +0000 UTC 
m=+9114.117910488" Dec 05 09:18:02 crc kubenswrapper[4780]: I1205 09:18:02.139142 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:18:02 crc kubenswrapper[4780]: E1205 09:18:02.139676 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:18:05 crc kubenswrapper[4780]: I1205 09:18:05.814074 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:05 crc kubenswrapper[4780]: I1205 09:18:05.814405 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:05 crc kubenswrapper[4780]: I1205 09:18:05.861963 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:06 crc kubenswrapper[4780]: I1205 09:18:06.152059 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:06 crc kubenswrapper[4780]: I1205 09:18:06.215655 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:18:08 crc kubenswrapper[4780]: I1205 09:18:08.098441 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4jr7x" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="registry-server" containerID="cri-o://032cf33b25fb603b0bac33ac466b916af4abae6fffed0018324bf4eafa432a6b" gracePeriod=2 Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.111736 4780 generic.go:334] "Generic (PLEG): container finished" podID="39312413-6f0c-47d2-bb90-72a837e912fa" containerID="032cf33b25fb603b0bac33ac466b916af4abae6fffed0018324bf4eafa432a6b" exitCode=0 Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.111781 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerDied","Data":"032cf33b25fb603b0bac33ac466b916af4abae6fffed0018324bf4eafa432a6b"} Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.112087 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4jr7x" event={"ID":"39312413-6f0c-47d2-bb90-72a837e912fa","Type":"ContainerDied","Data":"3b38ad4c8ab2cebbbf555570cd7d5d5ef1f440bc7b38b3d4f751788e79246dce"} Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.112105 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b38ad4c8ab2cebbbf555570cd7d5d5ef1f440bc7b38b3d4f751788e79246dce" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.176276 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.306472 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content\") pod \"39312413-6f0c-47d2-bb90-72a837e912fa\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.306638 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities\") pod \"39312413-6f0c-47d2-bb90-72a837e912fa\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.306821 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4mv\" (UniqueName: \"kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv\") pod \"39312413-6f0c-47d2-bb90-72a837e912fa\" (UID: \"39312413-6f0c-47d2-bb90-72a837e912fa\") " Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.307807 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities" (OuterVolumeSpecName: "utilities") pod "39312413-6f0c-47d2-bb90-72a837e912fa" (UID: "39312413-6f0c-47d2-bb90-72a837e912fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.312626 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv" (OuterVolumeSpecName: "kube-api-access-7w4mv") pod "39312413-6f0c-47d2-bb90-72a837e912fa" (UID: "39312413-6f0c-47d2-bb90-72a837e912fa"). InnerVolumeSpecName "kube-api-access-7w4mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.325540 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39312413-6f0c-47d2-bb90-72a837e912fa" (UID: "39312413-6f0c-47d2-bb90-72a837e912fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.408706 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4mv\" (UniqueName: \"kubernetes.io/projected/39312413-6f0c-47d2-bb90-72a837e912fa-kube-api-access-7w4mv\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.408737 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:09 crc kubenswrapper[4780]: I1205 09:18:09.408747 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39312413-6f0c-47d2-bb90-72a837e912fa-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:10 crc kubenswrapper[4780]: I1205 09:18:10.120170 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4jr7x" Dec 05 09:18:10 crc kubenswrapper[4780]: I1205 09:18:10.160635 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:18:10 crc kubenswrapper[4780]: I1205 09:18:10.171008 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4jr7x"] Dec 05 09:18:12 crc kubenswrapper[4780]: I1205 09:18:12.151185 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" path="/var/lib/kubelet/pods/39312413-6f0c-47d2-bb90-72a837e912fa/volumes" Dec 05 09:18:13 crc kubenswrapper[4780]: I1205 09:18:13.138380 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:18:13 crc kubenswrapper[4780]: E1205 09:18:13.138954 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:18:25 crc kubenswrapper[4780]: I1205 09:18:25.138935 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:18:25 crc kubenswrapper[4780]: E1205 09:18:25.139777 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.810353 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:26 crc kubenswrapper[4780]: E1205 09:18:26.810773 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="registry-server" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.810787 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="registry-server" Dec 05 09:18:26 crc kubenswrapper[4780]: E1205 09:18:26.810816 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="extract-utilities" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.810823 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="extract-utilities" Dec 05 09:18:26 crc kubenswrapper[4780]: E1205 09:18:26.810842 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="extract-content" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.810849 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" containerName="extract-content" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.811808 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="39312413-6f0c-47d2-bb90-72a837e912fa" 
containerName="registry-server" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.813571 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.823294 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.973816 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbp5\" (UniqueName: \"kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.974297 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:26 crc kubenswrapper[4780]: I1205 09:18:26.974323 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.076109 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbp5\" (UniqueName: \"kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.076166 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.076188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.076629 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.076775 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " 
pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.096861 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbp5\" (UniqueName: \"kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5\") pod \"certified-operators-frsp8\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.136507 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:27 crc kubenswrapper[4780]: I1205 09:18:27.649342 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:28 crc kubenswrapper[4780]: I1205 09:18:28.287282 4780 generic.go:334] "Generic (PLEG): container finished" podID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerID="6166c59c718bbe25f012a7f33042f1c2cc23b7cc1bbeabfa916798f9cdfcb0a6" exitCode=0 Dec 05 09:18:28 crc kubenswrapper[4780]: I1205 09:18:28.287383 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerDied","Data":"6166c59c718bbe25f012a7f33042f1c2cc23b7cc1bbeabfa916798f9cdfcb0a6"} Dec 05 09:18:28 crc kubenswrapper[4780]: I1205 09:18:28.287634 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerStarted","Data":"a099f2ef7a3069c9743208c8f0100200fa07043b828aa6052c09f05168039b35"} Dec 05 09:18:29 crc kubenswrapper[4780]: I1205 09:18:29.299378 4780 generic.go:334] "Generic (PLEG): container finished" podID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerID="e096b32bc50d9fc1944d0200b97a2008b088f0a61ca4bedc832b047d7f8628a4" exitCode=0 Dec 05 09:18:29 crc kubenswrapper[4780]: I1205 09:18:29.299463 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerDied","Data":"e096b32bc50d9fc1944d0200b97a2008b088f0a61ca4bedc832b047d7f8628a4"} Dec 05 09:18:31 crc kubenswrapper[4780]: I1205 09:18:31.347991 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerStarted","Data":"93714d879ae3cafc402b727a5b910231b72769b9064685252af2ba517502ee21"} Dec 05 09:18:31 crc kubenswrapper[4780]: I1205 09:18:31.369731 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-frsp8" podStartSLOduration=3.936467929 podStartE2EDuration="5.369710889s" podCreationTimestamp="2025-12-05 09:18:26 +0000 UTC" firstStartedPulling="2025-12-05 09:18:28.289160375 +0000 UTC m=+9142.358676707" lastFinishedPulling="2025-12-05 09:18:29.722403335 +0000 UTC m=+9143.791919667" observedRunningTime="2025-12-05 09:18:31.364938701 +0000 UTC m=+9145.434455043" watchObservedRunningTime="2025-12-05 09:18:31.369710889 +0000 UTC m=+9145.439227221" Dec 05 09:18:37 crc kubenswrapper[4780]: I1205 09:18:37.138123 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:37 crc kubenswrapper[4780]: I1205 09:18:37.138977 4780 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:37 crc kubenswrapper[4780]: I1205 09:18:37.190984 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:37 crc kubenswrapper[4780]: I1205 09:18:37.485670 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:37 crc kubenswrapper[4780]: I1205 09:18:37.555700 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:39 crc kubenswrapper[4780]: I1205 09:18:39.455328 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-frsp8" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="registry-server" containerID="cri-o://93714d879ae3cafc402b727a5b910231b72769b9064685252af2ba517502ee21" gracePeriod=2 Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.139969 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:18:40 crc kubenswrapper[4780]: E1205 09:18:40.140551 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.469853 4780 generic.go:334] "Generic (PLEG): container finished" podID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerID="93714d879ae3cafc402b727a5b910231b72769b9064685252af2ba517502ee21" exitCode=0 Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.469912 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerDied","Data":"93714d879ae3cafc402b727a5b910231b72769b9064685252af2ba517502ee21"} Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.659063 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.752365 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cbp5\" (UniqueName: \"kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5\") pod \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.752598 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities\") pod \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.752839 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content\") pod \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\" (UID: \"1973ad3c-aea5-4b4f-8378-77b4b73c49f4\") " Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.753257 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities" (OuterVolumeSpecName: "utilities") pod "1973ad3c-aea5-4b4f-8378-77b4b73c49f4" (UID: "1973ad3c-aea5-4b4f-8378-77b4b73c49f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.753538 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.759754 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5" (OuterVolumeSpecName: "kube-api-access-7cbp5") pod "1973ad3c-aea5-4b4f-8378-77b4b73c49f4" (UID: "1973ad3c-aea5-4b4f-8378-77b4b73c49f4"). InnerVolumeSpecName "kube-api-access-7cbp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.814221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1973ad3c-aea5-4b4f-8378-77b4b73c49f4" (UID: "1973ad3c-aea5-4b4f-8378-77b4b73c49f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.855304 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:40 crc kubenswrapper[4780]: I1205 09:18:40.855637 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cbp5\" (UniqueName: \"kubernetes.io/projected/1973ad3c-aea5-4b4f-8378-77b4b73c49f4-kube-api-access-7cbp5\") on node \"crc\" DevicePath \"\"" Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.481573 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frsp8" event={"ID":"1973ad3c-aea5-4b4f-8378-77b4b73c49f4","Type":"ContainerDied","Data":"a099f2ef7a3069c9743208c8f0100200fa07043b828aa6052c09f05168039b35"} Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.481960 4780 scope.go:117] "RemoveContainer" containerID="93714d879ae3cafc402b727a5b910231b72769b9064685252af2ba517502ee21" Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.481681 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frsp8" Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.508723 4780 scope.go:117] "RemoveContainer" containerID="e096b32bc50d9fc1944d0200b97a2008b088f0a61ca4bedc832b047d7f8628a4" Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.523330 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.534615 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-frsp8"] Dec 05 09:18:41 crc kubenswrapper[4780]: I1205 09:18:41.880698 4780 scope.go:117] "RemoveContainer" containerID="6166c59c718bbe25f012a7f33042f1c2cc23b7cc1bbeabfa916798f9cdfcb0a6" Dec 05 09:18:42 crc kubenswrapper[4780]: I1205 09:18:42.149802 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" path="/var/lib/kubelet/pods/1973ad3c-aea5-4b4f-8378-77b4b73c49f4/volumes" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.139856 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:18:53 crc kubenswrapper[4780]: E1205 09:18:53.140753 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.234256 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:18:53 crc kubenswrapper[4780]: E1205 09:18:53.235178 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="registry-server" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.235198 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="registry-server" Dec 05 09:18:53 crc kubenswrapper[4780]: E1205 
09:18:53.235206 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="extract-utilities" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.235212 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="extract-utilities" Dec 05 09:18:53 crc kubenswrapper[4780]: E1205 09:18:53.235242 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="extract-content" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.235248 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="extract-content" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.235423 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1973ad3c-aea5-4b4f-8378-77b4b73c49f4" containerName="registry-server" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.237132 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.250983 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.405173 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwndb\" (UniqueName: \"kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.405296 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.405372 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.506864 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwndb\" (UniqueName: \"kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.506970 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.507043 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.507617 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.507619 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.529239 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwndb\" (UniqueName: \"kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb\") pod \"community-operators-qdbz4\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:53 crc kubenswrapper[4780]: I1205 09:18:53.565220 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:18:54 crc kubenswrapper[4780]: I1205 09:18:54.157465 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:18:54 crc kubenswrapper[4780]: I1205 09:18:54.588961 4780 generic.go:334] "Generic (PLEG): container finished" podID="619a8560-119e-436c-ab68-1f5daa87947b" containerID="a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944" exitCode=0 Dec 05 09:18:54 crc kubenswrapper[4780]: I1205 09:18:54.589078 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerDied","Data":"a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944"} Dec 05 09:18:54 crc kubenswrapper[4780]: I1205 09:18:54.589330 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerStarted","Data":"9b8234cc2dc3ea88ac5a085b15c7c35c637b361fc52d6e2c2e8893b2c1349aca"} Dec 05 09:18:55 crc kubenswrapper[4780]: I1205 09:18:55.602480 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerStarted","Data":"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a"} Dec 05 09:18:56 crc kubenswrapper[4780]: I1205 09:18:56.613351 4780 generic.go:334] "Generic (PLEG): container finished" podID="619a8560-119e-436c-ab68-1f5daa87947b" containerID="f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a" exitCode=0 Dec 05 09:18:56 crc kubenswrapper[4780]: I1205 09:18:56.614471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerDied","Data":"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a"} 
Dec 05 09:18:57 crc kubenswrapper[4780]: I1205 09:18:57.625592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerStarted","Data":"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851"} Dec 05 09:18:57 crc kubenswrapper[4780]: I1205 09:18:57.647790 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdbz4" podStartSLOduration=2.226565656 podStartE2EDuration="4.647767593s" podCreationTimestamp="2025-12-05 09:18:53 +0000 UTC" firstStartedPulling="2025-12-05 09:18:54.59060898 +0000 UTC m=+9168.660125312" lastFinishedPulling="2025-12-05 09:18:57.011810917 +0000 UTC m=+9171.081327249" observedRunningTime="2025-12-05 09:18:57.642314885 +0000 UTC m=+9171.711831227" watchObservedRunningTime="2025-12-05 09:18:57.647767593 +0000 UTC m=+9171.717283925" Dec 05 09:19:03 crc kubenswrapper[4780]: I1205 09:19:03.565942 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:03 crc kubenswrapper[4780]: I1205 09:19:03.566469 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:03 crc kubenswrapper[4780]: I1205 09:19:03.612956 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:03 crc kubenswrapper[4780]: I1205 09:19:03.727623 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:03 crc kubenswrapper[4780]: I1205 09:19:03.866769 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:19:05 crc kubenswrapper[4780]: I1205 09:19:05.701736 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdbz4" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="registry-server" containerID="cri-o://140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851" gracePeriod=2 Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.124106 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.259935 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content\") pod \"619a8560-119e-436c-ab68-1f5daa87947b\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.260129 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwndb\" (UniqueName: \"kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb\") pod \"619a8560-119e-436c-ab68-1f5daa87947b\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.260182 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities\") pod \"619a8560-119e-436c-ab68-1f5daa87947b\" (UID: \"619a8560-119e-436c-ab68-1f5daa87947b\") " Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.262695 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities" (OuterVolumeSpecName: "utilities") pod "619a8560-119e-436c-ab68-1f5daa87947b" (UID: "619a8560-119e-436c-ab68-1f5daa87947b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.272791 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb" (OuterVolumeSpecName: "kube-api-access-bwndb") pod "619a8560-119e-436c-ab68-1f5daa87947b" (UID: "619a8560-119e-436c-ab68-1f5daa87947b"). InnerVolumeSpecName "kube-api-access-bwndb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.315832 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "619a8560-119e-436c-ab68-1f5daa87947b" (UID: "619a8560-119e-436c-ab68-1f5daa87947b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.363952 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.363991 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwndb\" (UniqueName: \"kubernetes.io/projected/619a8560-119e-436c-ab68-1f5daa87947b-kube-api-access-bwndb\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.364002 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619a8560-119e-436c-ab68-1f5daa87947b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.713497 4780 generic.go:334] "Generic (PLEG): container finished" podID="619a8560-119e-436c-ab68-1f5daa87947b" containerID="140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851" exitCode=0 Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.713562 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdbz4" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.713567 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerDied","Data":"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851"} Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.714052 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdbz4" event={"ID":"619a8560-119e-436c-ab68-1f5daa87947b","Type":"ContainerDied","Data":"9b8234cc2dc3ea88ac5a085b15c7c35c637b361fc52d6e2c2e8893b2c1349aca"} Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.714072 4780 scope.go:117] "RemoveContainer" containerID="140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.738963 4780 scope.go:117] "RemoveContainer" containerID="f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.749079 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.758913 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdbz4"] Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.788453 4780 scope.go:117] "RemoveContainer" containerID="a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.810657 4780 scope.go:117] "RemoveContainer" containerID="140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851" Dec 05 09:19:06 crc kubenswrapper[4780]: E1205 09:19:06.811320 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851\": container with ID starting with 140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851 not found: ID does not exist" containerID="140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.811355 
4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851"} err="failed to get container status \"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851\": rpc error: code = NotFound desc = could not find container \"140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851\": container with ID starting with 140136deaa0332f23402b73bf8f6dbb7625b31166b187e9f1581278043724851 not found: ID does not exist" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.811376 4780 scope.go:117] "RemoveContainer" containerID="f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a" Dec 05 09:19:06 crc kubenswrapper[4780]: E1205 09:19:06.811906 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a\": container with ID starting with f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a not found: ID does not exist" containerID="f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.811955 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a"} err="failed to get container status \"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a\": rpc error: code = NotFound desc = could not find container \"f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a\": container with ID starting with f8271eaffdaed5eac424168b4fc50c1962d5c027bf9848c62a99aa41615b2f7a not found: ID does not exist" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.811987 4780 scope.go:117] "RemoveContainer" containerID="a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944" Dec 05 09:19:06 crc kubenswrapper[4780]: E1205 09:19:06.812278 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944\": container with ID starting with a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944 not found: ID does not exist" containerID="a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944" Dec 05 09:19:06 crc kubenswrapper[4780]: I1205 09:19:06.812405 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944"} err="failed to get container status \"a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944\": rpc error: code = NotFound desc = could not find container \"a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944\": container with ID starting with a1ffa7cb42b256c7603d630a6fe1cca920291d2923ec6943c5f427233cc9f944 not found: ID does not exist" Dec 05 09:19:08 crc kubenswrapper[4780]: I1205 09:19:08.139341 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847" Dec 05 09:19:08 crc kubenswrapper[4780]: I1205 09:19:08.211029 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619a8560-119e-436c-ab68-1f5daa87947b" path="/var/lib/kubelet/pods/619a8560-119e-436c-ab68-1f5daa87947b/volumes" Dec 05 09:19:09 crc kubenswrapper[4780]: I1205 09:19:09.748783 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09"} Dec 05 09:19:22 crc kubenswrapper[4780]: I1205 09:19:22.859103 4780 generic.go:334] "Generic (PLEG): container finished" podID="c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" containerID="333a10372d5e4d9cf8aa7781f701ef6aac26a9db3e1dee2761cb9015acd619e3" exitCode=0 Dec 05 09:19:22 crc kubenswrapper[4780]: I1205 09:19:22.859189 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" event={"ID":"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb","Type":"ContainerDied","Data":"333a10372d5e4d9cf8aa7781f701ef6aac26a9db3e1dee2761cb9015acd619e3"} Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.274616 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405239 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405310 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405479 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405534 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405571 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405598 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c42qz\" (UniqueName: \"kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405648 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key\") pod 
\"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405690 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.405810 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0\") pod \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\" (UID: \"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb\") " Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.411759 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz" (OuterVolumeSpecName: "kube-api-access-c42qz") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "kube-api-access-c42qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.420244 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.435045 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.437079 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.444827 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.445140 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory" (OuterVolumeSpecName: "inventory") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.446988 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.457083 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.458912 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" (UID: "c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508243 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508273 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508283 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508292 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c42qz\" (UniqueName: \"kubernetes.io/projected/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-kube-api-access-c42qz\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508304 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508313 4780 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508322 4780 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508330 4780 reconciler_common.go:293] "Volume detached for volume 
\"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.508339 4780 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.879756 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" event={"ID":"c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb","Type":"ContainerDied","Data":"ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4"} Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.879793 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdea26a4389fa83316e7e9196034e2cba3f44acc6988aca7cfc25b9e716c9e4" Dec 05 09:19:24 crc kubenswrapper[4780]: I1205 09:19:24.879849 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4" Dec 05 09:19:36 crc kubenswrapper[4780]: I1205 09:19:36.015464 4780 scope.go:117] "RemoveContainer" containerID="9c2f84782d0f4753e50ca2e337e01de481ee575e4da74ddb2e07608d2d6ad684" Dec 05 09:19:36 crc kubenswrapper[4780]: I1205 09:19:36.041490 4780 scope.go:117] "RemoveContainer" containerID="7c68aa60f6b2533bde626d08711f18e45bca3321993b404476462d2071b3e605" Dec 05 09:19:36 crc kubenswrapper[4780]: I1205 09:19:36.096833 4780 scope.go:117] "RemoveContainer" containerID="57bfdeaa072953d4cc947eb2ce6d9a3094cd8d1c3bbedbb153d36c3abd705bc3" Dec 05 09:21:06 crc kubenswrapper[4780]: I1205 09:21:06.238547 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 05 09:21:06 crc kubenswrapper[4780]: I1205 09:21:06.239464 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" containerName="adoption" containerID="cri-o://873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c" gracePeriod=30 Dec 05 09:21:29 crc kubenswrapper[4780]: I1205 09:21:29.908206 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:21:29 crc kubenswrapper[4780]: I1205 09:21:29.908793 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.756465 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.824296 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") pod \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.824462 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q44mj\" (UniqueName: \"kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj\") pod \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\" (UID: \"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3\") " Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.831856 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj" (OuterVolumeSpecName: "kube-api-access-q44mj") pod "1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" (UID: "1049d285-fbfa-474c-9e0a-dd1fa5f7eca3"). InnerVolumeSpecName "kube-api-access-q44mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.847494 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95" (OuterVolumeSpecName: "mariadb-data") pod "1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" (UID: "1049d285-fbfa-474c-9e0a-dd1fa5f7eca3"). InnerVolumeSpecName "pvc-320e09c3-3525-4433-b297-486572f74f95". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.926964 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") on node \"crc\" " Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.927007 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q44mj\" (UniqueName: \"kubernetes.io/projected/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3-kube-api-access-q44mj\") on node \"crc\" DevicePath \"\"" Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.951793 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 05 09:21:36 crc kubenswrapper[4780]: I1205 09:21:36.951976 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-320e09c3-3525-4433-b297-486572f74f95" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95") on node "crc" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.029254 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-320e09c3-3525-4433-b297-486572f74f95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-320e09c3-3525-4433-b297-486572f74f95\") on node \"crc\" DevicePath \"\"" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.060042 4780 generic.go:334] "Generic (PLEG): container finished" podID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" containerID="873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c" exitCode=137 Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.060086 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.060130 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3","Type":"ContainerDied","Data":"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c"} Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.060180 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"1049d285-fbfa-474c-9e0a-dd1fa5f7eca3","Type":"ContainerDied","Data":"8ece86ac341f0b738ac36e7319f38bc1af0e1112099ff92205602a045262b0aa"} Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.060201 4780 scope.go:117] "RemoveContainer" containerID="873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.102385 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.105909 4780 scope.go:117] "RemoveContainer" containerID="873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c" Dec 05 09:21:37 crc kubenswrapper[4780]: E1205 09:21:37.106388 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c\": container with ID starting with 873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c not found: ID does not exist" containerID="873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.106427 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c"} err="failed to get container status \"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c\": rpc error: code = NotFound desc = could not find container \"873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c\": container with ID starting with 873e3d15c5a0e7d565383dab0fe915a08262c36304b6b465a4aa2f02a1f64c6c not found: ID does not exist" Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.113073 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.676554 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 05 09:21:37 crc kubenswrapper[4780]: I1205 09:21:37.677175 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="b26313a3-240f-4139-87cd-8002f9f36c02" containerName="adoption" containerID="cri-o://0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac" gracePeriod=30 Dec 05 09:21:38 crc kubenswrapper[4780]: I1205 09:21:38.151791 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" path="/var/lib/kubelet/pods/1049d285-fbfa-474c-9e0a-dd1fa5f7eca3/volumes" Dec 05 09:21:59 crc kubenswrapper[4780]: I1205 09:21:59.908470 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:21:59 crc kubenswrapper[4780]: I1205 09:21:59.909020 4780 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.190913 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.282475 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5hpt\" (UniqueName: \"kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt\") pod \"b26313a3-240f-4139-87cd-8002f9f36c02\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.283407 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") pod \"b26313a3-240f-4139-87cd-8002f9f36c02\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.284562 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert\") pod \"b26313a3-240f-4139-87cd-8002f9f36c02\" (UID: \"b26313a3-240f-4139-87cd-8002f9f36c02\") " Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.288851 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "b26313a3-240f-4139-87cd-8002f9f36c02" (UID: "b26313a3-240f-4139-87cd-8002f9f36c02"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.289858 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt" (OuterVolumeSpecName: "kube-api-access-c5hpt") pod "b26313a3-240f-4139-87cd-8002f9f36c02" (UID: "b26313a3-240f-4139-87cd-8002f9f36c02"). InnerVolumeSpecName "kube-api-access-c5hpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.303843 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9" (OuterVolumeSpecName: "ovn-data") pod "b26313a3-240f-4139-87cd-8002f9f36c02" (UID: "b26313a3-240f-4139-87cd-8002f9f36c02"). InnerVolumeSpecName "pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.336865 4780 generic.go:334] "Generic (PLEG): container finished" podID="b26313a3-240f-4139-87cd-8002f9f36c02" containerID="0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac" exitCode=137 Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.336981 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"b26313a3-240f-4139-87cd-8002f9f36c02","Type":"ContainerDied","Data":"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac"} Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.337013 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"b26313a3-240f-4139-87cd-8002f9f36c02","Type":"ContainerDied","Data":"b1cc14b1f513727d5e576b53539e3d163178d86b08cf0f428838be01868eb690"} Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.337031 4780 scope.go:117] "RemoveContainer" containerID="0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.337183 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.388057 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5hpt\" (UniqueName: \"kubernetes.io/projected/b26313a3-240f-4139-87cd-8002f9f36c02-kube-api-access-c5hpt\") on node \"crc\" DevicePath \"\"" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.388106 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") on node \"crc\" " Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.388118 4780 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/b26313a3-240f-4139-87cd-8002f9f36c02-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.401123 4780 scope.go:117] "RemoveContainer" containerID="0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac" Dec 05 09:22:08 crc kubenswrapper[4780]: E1205 09:22:08.402193 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac\": container with ID starting with 0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac not found: ID does not exist" containerID="0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.402224 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac"} err="failed to get container status \"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac\": rpc error: code = NotFound desc = could not find container \"0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac\": container with ID starting with 0450c181bc5f7fbee85b2dd3125bae41906ab7005f39ecd5a06371a2d006e5ac not found: ID does not exist" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.411860 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 
09:22:08.421474 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.425097 4780 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.425268 4780 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9") on node "crc" Dec 05 09:22:08 crc kubenswrapper[4780]: I1205 09:22:08.490505 4780 reconciler_common.go:293] "Volume detached for volume \"pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c2960b4-3ea5-472e-a4a2-dd7e176edec9\") on node \"crc\" DevicePath \"\"" Dec 05 09:22:10 crc kubenswrapper[4780]: I1205 09:22:10.149671 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26313a3-240f-4139-87cd-8002f9f36c02" path="/var/lib/kubelet/pods/b26313a3-240f-4139-87cd-8002f9f36c02/volumes" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.095769 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096668 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" containerName="adoption" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096682 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" containerName="adoption" Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096698 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="extract-content" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096704 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="extract-content" Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096728 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096736 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096756 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="registry-server" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096762 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="registry-server" Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096776 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26313a3-240f-4139-87cd-8002f9f36c02" containerName="adoption" Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096783 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26313a3-240f-4139-87cd-8002f9f36c02" containerName="adoption" Dec 05 09:22:28 crc kubenswrapper[4780]: E1205 09:22:28.096795 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619a8560-119e-436c-ab68-1f5daa87947b" 
containerName="extract-utilities"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.096801 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="extract-utilities"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.097035 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26313a3-240f-4139-87cd-8002f9f36c02" containerName="adoption"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.097055 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="1049d285-fbfa-474c-9e0a-dd1fa5f7eca3" containerName="adoption"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.097078 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="619a8560-119e-436c-ab68-1f5daa87947b" containerName="registry-server"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.097089 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.097855 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.099547 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.099689 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j5kk9"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.100358 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.102999 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.110116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175270 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175343 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175467 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175497 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175588 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175665 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175691 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnmzn\" (UniqueName: \"kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175719 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.175740 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.277787 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.277901 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.277928 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnmzn\" (UniqueName: \"kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.277959 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.277979 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.278078 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.278138 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.278195 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.278227 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.279075 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.279147 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.279767 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.279996 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.280360 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.283837 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.284172 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.285361 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.309249 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnmzn\" (UniqueName: \"kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.310634 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.432612 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 05 09:22:28 crc kubenswrapper[4780]: I1205 09:22:28.878910 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.558222 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d2ce5fa-2138-48bd-9af7-76d136e21dfe","Type":"ContainerStarted","Data":"48f629d7951386eea32ea2404e3b6ba624f5954571799b81e79177cec6fe108d"}
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.907959 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.908016 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.908058 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd"
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.908908 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 09:22:29 crc kubenswrapper[4780]: I1205 09:22:29.908984 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09" gracePeriod=600
Dec 05 09:22:30 crc kubenswrapper[4780]: I1205 09:22:30.577406 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09" exitCode=0
Dec 05 09:22:30 crc kubenswrapper[4780]: I1205 09:22:30.577592 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09"}
Dec 05 09:22:30 crc kubenswrapper[4780]: I1205 09:22:30.578178 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733"}
Dec 05 09:22:30 crc kubenswrapper[4780]: I1205 09:22:30.578219 4780 scope.go:117] "RemoveContainer" containerID="e34ea390b0b673424281821d8c36b748c0684587785873fe6f74617df5508847"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.739227 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"]
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.742085 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.754116 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"]
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.794088 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78v6\" (UniqueName: \"kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.794248 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-catalog-content\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.794307 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.896449 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78v6\" (UniqueName: \"kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.896557 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-catalog-content\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.896608 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.897223 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs"
pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:10 crc kubenswrapper[4780]: I1205 09:23:10.972140 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78v6\" (UniqueName: \"kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6\") pod \"redhat-operators-8pzxs\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:11 crc kubenswrapper[4780]: I1205 09:23:11.066037 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:13 crc kubenswrapper[4780]: E1205 09:23:13.328940 4780 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:65066e8ca260a75886ae57f157049605" Dec 05 09:23:13 crc kubenswrapper[4780]: E1205 09:23:13.329418 4780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:65066e8ca260a75886ae57f157049605" Dec 05 09:23:13 crc kubenswrapper[4780]: E1205 09:23:13.329630 4780 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:65066e8ca260a75886ae57f157049605,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnmzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[]
,},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5d2ce5fa-2138-48bd-9af7-76d136e21dfe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 09:23:13 crc kubenswrapper[4780]: E1205 09:23:13.331052 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" Dec 05 09:23:13 crc kubenswrapper[4780]: I1205 09:23:13.733182 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"] Dec 05 09:23:14 crc kubenswrapper[4780]: I1205 09:23:14.034637 4780 generic.go:334] "Generic (PLEG): container finished" podID="c9801add-61f6-4440-9256-fe30e6265c83" containerID="bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657" exitCode=0 Dec 05 09:23:14 crc kubenswrapper[4780]: I1205 09:23:14.034682 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerDied","Data":"bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657"} Dec 05 09:23:14 crc kubenswrapper[4780]: I1205 09:23:14.034737 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerStarted","Data":"5bc86e45e96f9e4951bd97e475058e71fcad414c98a0b5c637f88d5112f75f14"} Dec 05 09:23:14 crc kubenswrapper[4780]: E1205 09:23:14.036713 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:65066e8ca260a75886ae57f157049605\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" Dec 05 09:23:14 crc kubenswrapper[4780]: I1205 09:23:14.037217 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:23:16 crc kubenswrapper[4780]: I1205 09:23:16.057442 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerStarted","Data":"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61"} Dec 05 09:23:19 crc kubenswrapper[4780]: I1205 09:23:19.087958 4780 generic.go:334] "Generic (PLEG): container finished" podID="c9801add-61f6-4440-9256-fe30e6265c83" 
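[Analysis note] The tempest container above fails its first pull with ErrImagePull (the copy was cancelled mid-transfer) and immediately transitions to ImagePullBackOff; the pull later succeeds, as the 09:23:31 events below show. A sketch for tallying both record types per image ref from the saved excerpt; kubelet.log remains a hypothetical file name:

import re
from collections import Counter

IMG_RE = re.compile(r'image="([^"]+)"')
# Back-off records carry the image ref behind doubled klog escaping (\\\").
BACKOFF_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')

pull_errors, backoffs = Counter(), Counter()
with open("kubelet.log") as fh:
    for line in fh:
        if "PullImage from image service failed" in line:
            m = IMG_RE.search(line)
            if m:
                pull_errors[m.group(1)] += 1
        elif "ImagePullBackOff" in line:
            m = BACKOFF_RE.search(line)
            if m:
                backoffs[m.group(1)] += 1

for img, n in pull_errors.items():
    print(f"{img}: {n} pull error(s), {backoffs.get(img, 0)} back-off record(s)")

A single pull error followed by a successful retry, as here, is usually transient registry or network trouble rather than a bad image reference.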
containerID="aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61" exitCode=0 Dec 05 09:23:19 crc kubenswrapper[4780]: I1205 09:23:19.088060 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerDied","Data":"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61"} Dec 05 09:23:20 crc kubenswrapper[4780]: I1205 09:23:20.100296 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerStarted","Data":"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac"} Dec 05 09:23:20 crc kubenswrapper[4780]: I1205 09:23:20.135032 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8pzxs" podStartSLOduration=4.668772177 podStartE2EDuration="10.134994829s" podCreationTimestamp="2025-12-05 09:23:10 +0000 UTC" firstStartedPulling="2025-12-05 09:23:14.036965584 +0000 UTC m=+9428.106481916" lastFinishedPulling="2025-12-05 09:23:19.503188236 +0000 UTC m=+9433.572704568" observedRunningTime="2025-12-05 09:23:20.122111001 +0000 UTC m=+9434.191627333" watchObservedRunningTime="2025-12-05 09:23:20.134994829 +0000 UTC m=+9434.204511171" Dec 05 09:23:21 crc kubenswrapper[4780]: I1205 09:23:21.067312 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:21 crc kubenswrapper[4780]: I1205 09:23:21.067369 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:22 crc kubenswrapper[4780]: I1205 09:23:22.411783 4780 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8pzxs" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="registry-server" probeResult="failure" output=< Dec 05 09:23:22 crc kubenswrapper[4780]: timeout: failed to connect service ":50051" within 1s Dec 05 09:23:22 crc kubenswrapper[4780]: > Dec 05 09:23:29 crc kubenswrapper[4780]: I1205 09:23:29.341725 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 09:23:31 crc kubenswrapper[4780]: I1205 09:23:31.113586 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:31 crc kubenswrapper[4780]: I1205 09:23:31.163022 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:31 crc kubenswrapper[4780]: I1205 09:23:31.214142 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d2ce5fa-2138-48bd-9af7-76d136e21dfe","Type":"ContainerStarted","Data":"b96cfe0d1ed92414ac4b5c1c1bfe6e634898b9b368406fee4f79b0ba2a196a83"} Dec 05 09:23:31 crc kubenswrapper[4780]: I1205 09:23:31.231754 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.775094635 podStartE2EDuration="1m4.231737522s" podCreationTimestamp="2025-12-05 09:22:27 +0000 UTC" firstStartedPulling="2025-12-05 09:22:28.882911503 +0000 UTC m=+9382.952427835" lastFinishedPulling="2025-12-05 09:23:29.33955439 +0000 UTC m=+9443.409070722" observedRunningTime="2025-12-05 09:23:31.230548021 +0000 UTC m=+9445.300064343" 
watchObservedRunningTime="2025-12-05 09:23:31.231737522 +0000 UTC m=+9445.301253854" Dec 05 09:23:31 crc kubenswrapper[4780]: I1205 09:23:31.351324 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"] Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.222122 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pzxs" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="registry-server" containerID="cri-o://8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac" gracePeriod=2 Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.690794 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.765760 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m78v6\" (UniqueName: \"kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6\") pod \"c9801add-61f6-4440-9256-fe30e6265c83\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.766123 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities\") pod \"c9801add-61f6-4440-9256-fe30e6265c83\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.766371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-catalog-content\") pod \"c9801add-61f6-4440-9256-fe30e6265c83\" (UID: \"c9801add-61f6-4440-9256-fe30e6265c83\") " Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.767490 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities" (OuterVolumeSpecName: "utilities") pod "c9801add-61f6-4440-9256-fe30e6265c83" (UID: "c9801add-61f6-4440-9256-fe30e6265c83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.868770 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.886075 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9801add-61f6-4440-9256-fe30e6265c83" (UID: "c9801add-61f6-4440-9256-fe30e6265c83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:23:32 crc kubenswrapper[4780]: I1205 09:23:32.970314 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9801add-61f6-4440-9256-fe30e6265c83-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.232224 4780 generic.go:334] "Generic (PLEG): container finished" podID="c9801add-61f6-4440-9256-fe30e6265c83" containerID="8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac" exitCode=0 Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.232265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerDied","Data":"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac"} Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.232296 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pzxs" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.232321 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pzxs" event={"ID":"c9801add-61f6-4440-9256-fe30e6265c83","Type":"ContainerDied","Data":"5bc86e45e96f9e4951bd97e475058e71fcad414c98a0b5c637f88d5112f75f14"} Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.232346 4780 scope.go:117] "RemoveContainer" containerID="8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.241446 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6" (OuterVolumeSpecName: "kube-api-access-m78v6") pod "c9801add-61f6-4440-9256-fe30e6265c83" (UID: "c9801add-61f6-4440-9256-fe30e6265c83"). InnerVolumeSpecName "kube-api-access-m78v6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.257702 4780 scope.go:117] "RemoveContainer" containerID="aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.276604 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m78v6\" (UniqueName: \"kubernetes.io/projected/c9801add-61f6-4440-9256-fe30e6265c83-kube-api-access-m78v6\") on node \"crc\" DevicePath \"\"" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.285143 4780 scope.go:117] "RemoveContainer" containerID="bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.330120 4780 scope.go:117] "RemoveContainer" containerID="8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac" Dec 05 09:23:33 crc kubenswrapper[4780]: E1205 09:23:33.330462 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac\": container with ID starting with 8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac not found: ID does not exist" containerID="8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.330501 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac"} err="failed to get container status \"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac\": rpc error: code = NotFound desc = could not find container \"8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac\": container with ID starting with 8fb9253608c208098e3fe7fe3d70b47064522e004225c834419dac5819563aac not found: ID does not exist" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.330531 4780 scope.go:117] "RemoveContainer" containerID="aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61" Dec 05 09:23:33 crc kubenswrapper[4780]: E1205 09:23:33.331033 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61\": container with ID starting with aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61 not found: ID does not exist" containerID="aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.331053 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61"} err="failed to get container status \"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61\": rpc error: code = NotFound desc = could not find container \"aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61\": container with ID starting with aeb0fb61029a7652de9c808cb62d2d8e1d107d81f1138ff911189cd8d9ba6e61 not found: ID does not exist" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.331070 4780 scope.go:117] "RemoveContainer" containerID="bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657" Dec 05 09:23:33 crc kubenswrapper[4780]: E1205 09:23:33.331438 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657\": container with ID starting with bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657 not found: ID does not exist" containerID="bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.331473 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657"} err="failed to get container status \"bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657\": rpc error: code = NotFound desc = could not find container \"bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657\": container with ID starting with bb2bcf1fa87765c4958ea54d7c52d6e9135cd980bfe2b65565fe0e43ada9d657 not found: ID does not exist" Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.584321 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"] Dec 05 09:23:33 crc kubenswrapper[4780]: I1205 09:23:33.593508 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8pzxs"] Dec 05 09:23:34 crc kubenswrapper[4780]: I1205 09:23:34.155495 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9801add-61f6-4440-9256-fe30e6265c83" path="/var/lib/kubelet/pods/c9801add-61f6-4440-9256-fe30e6265c83/volumes" Dec 05 09:24:36 crc kubenswrapper[4780]: I1205 09:24:36.350562 4780 scope.go:117] "RemoveContainer" containerID="b7946cc55b86514279d3d1ae332fa032c635db6276c01a403ce71016bd037363" Dec 05 09:24:36 crc kubenswrapper[4780]: I1205 09:24:36.381354 4780 scope.go:117] "RemoveContainer" containerID="888a76d54eacbba61ce407488d5b7936042f6bb88e8d1f400ba02c1d19959d19" Dec 05 09:24:36 crc kubenswrapper[4780]: I1205 09:24:36.436148 4780 scope.go:117] "RemoveContainer" containerID="032cf33b25fb603b0bac33ac466b916af4abae6fffed0018324bf4eafa432a6b" Dec 05 09:24:59 crc kubenswrapper[4780]: I1205 09:24:59.907901 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:24:59 crc kubenswrapper[4780]: I1205 09:24:59.908418 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:25:29 crc kubenswrapper[4780]: I1205 09:25:29.908367 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:25:29 crc kubenswrapper[4780]: I1205 09:25:29.908804 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:25:59 crc kubenswrapper[4780]: I1205 09:25:59.914145 
4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:25:59 crc kubenswrapper[4780]: I1205 09:25:59.914759 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:25:59 crc kubenswrapper[4780]: I1205 09:25:59.914810 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:25:59 crc kubenswrapper[4780]: I1205 09:25:59.917372 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:25:59 crc kubenswrapper[4780]: I1205 09:25:59.917457 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" gracePeriod=600 Dec 05 09:26:00 crc kubenswrapper[4780]: E1205 09:26:00.044575 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:26:00 crc kubenswrapper[4780]: I1205 09:26:00.616074 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" exitCode=0 Dec 05 09:26:00 crc kubenswrapper[4780]: I1205 09:26:00.616238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733"} Dec 05 09:26:00 crc kubenswrapper[4780]: I1205 09:26:00.616534 4780 scope.go:117] "RemoveContainer" containerID="a53dface95dac8488cff4a0130d25a47bde2c1dd141aef593525271abf3e5b09" Dec 05 09:26:00 crc kubenswrapper[4780]: I1205 09:26:00.617299 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:26:00 crc kubenswrapper[4780]: E1205 09:26:00.617611 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:26:12 crc kubenswrapper[4780]: I1205 09:26:12.141098 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:26:12 crc kubenswrapper[4780]: E1205 09:26:12.141997 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:26:24 crc kubenswrapper[4780]: I1205 09:26:24.141296 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:26:24 crc kubenswrapper[4780]: E1205 09:26:24.142035 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:26:39 crc kubenswrapper[4780]: I1205 09:26:39.139283 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:26:39 crc kubenswrapper[4780]: E1205 09:26:39.140141 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:26:53 crc kubenswrapper[4780]: I1205 09:26:53.139372 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:26:53 crc kubenswrapper[4780]: E1205 09:26:53.140218 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:27:08 crc kubenswrapper[4780]: I1205 09:27:08.139748 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:27:08 crc kubenswrapper[4780]: E1205 09:27:08.140990 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:27:20 crc kubenswrapper[4780]: I1205 09:27:20.138884 4780 
scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:27:20 crc kubenswrapper[4780]: E1205 09:27:20.139728 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:27:35 crc kubenswrapper[4780]: I1205 09:27:35.138857 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:27:35 crc kubenswrapper[4780]: E1205 09:27:35.139652 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:27:48 crc kubenswrapper[4780]: I1205 09:27:48.139510 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:27:48 crc kubenswrapper[4780]: E1205 09:27:48.140640 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:28:00 crc kubenswrapper[4780]: I1205 09:28:00.138802 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:28:00 crc kubenswrapper[4780]: E1205 09:28:00.139772 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:28:11 crc kubenswrapper[4780]: I1205 09:28:11.139336 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:28:11 crc kubenswrapper[4780]: E1205 09:28:11.140075 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:28:23 crc kubenswrapper[4780]: I1205 09:28:23.139191 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:28:23 crc kubenswrapper[4780]: E1205 09:28:23.140050 4780 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.789591 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"] Dec 05 09:28:32 crc kubenswrapper[4780]: E1205 09:28:32.791501 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="extract-utilities" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.791521 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="extract-utilities" Dec 05 09:28:32 crc kubenswrapper[4780]: E1205 09:28:32.791558 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="registry-server" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.791566 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="registry-server" Dec 05 09:28:32 crc kubenswrapper[4780]: E1205 09:28:32.791576 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="extract-content" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.791586 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="extract-content" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.791904 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9801add-61f6-4440-9256-fe30e6265c83" containerName="registry-server" Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.793780 4780 util.go:30] "No sandbox for pod can be found. 
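[Analysis note] Every retry above reports "back-off 5m0s": machine-config-daemon has crashed enough times that its restart delay has reached the cap. As I understand the kubelet defaults, the crash-loop delay starts at 10s, doubles per restart, and is capped at 5m; a sketch of that schedule (constants are assumptions, not read from this node's config):

def crashloop_delays(initial=10, cap=300, restarts=8):
    """Yield (restart_number, delay_seconds) for the assumed kubelet schedule."""
    delay = initial
    for n in range(1, restarts + 1):
        yield n, min(delay, cap)
        delay *= 2

for n, d in crashloop_delays():
    print(f"restart #{n}: wait {d}s")
# 10, 20, 40, 80, 160, then 300s (5m0s) from restart #6 onward

The 12-15s spacing of the log entries themselves is just the sync loop re-evaluating the pod and skipping it; the actual restart won't be attempted until the 5m back-off expires.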
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.793780 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.886380 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"]
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.895210 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.895291 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.895387 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kct4c\" (UniqueName: \"kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.997560 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kct4c\" (UniqueName: \"kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.997733 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.997781 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.998487 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:32 crc kubenswrapper[4780]: I1205 09:28:32.998524 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:33 crc kubenswrapper[4780]: I1205 09:28:33.018785 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kct4c\" (UniqueName: \"kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c\") pod \"redhat-marketplace-9d9gh\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:33 crc kubenswrapper[4780]: I1205 09:28:33.119241 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:33 crc kubenswrapper[4780]: I1205 09:28:33.961213 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"]
Dec 05 09:28:35 crc kubenswrapper[4780]: I1205 09:28:35.113339 4780 generic.go:334] "Generic (PLEG): container finished" podID="0316d700-a429-4066-b6d7-209efbf928c4" containerID="f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b" exitCode=0
Dec 05 09:28:35 crc kubenswrapper[4780]: I1205 09:28:35.113401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerDied","Data":"f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b"}
Dec 05 09:28:35 crc kubenswrapper[4780]: I1205 09:28:35.113852 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerStarted","Data":"01af67890b11afc17bd68e50746b7bb10cd8a65a999fc7653f240f3800bedecf"}
Dec 05 09:28:35 crc kubenswrapper[4780]: I1205 09:28:35.116063 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 09:28:36 crc kubenswrapper[4780]: I1205 09:28:36.126259 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerStarted","Data":"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f"}
Dec 05 09:28:36 crc kubenswrapper[4780]: I1205 09:28:36.143764 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733"
Dec 05 09:28:36 crc kubenswrapper[4780]: E1205 09:28:36.144045 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 09:28:37 crc kubenswrapper[4780]: I1205 09:28:37.141017 4780 generic.go:334] "Generic (PLEG): container finished" podID="0316d700-a429-4066-b6d7-209efbf928c4" containerID="07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f" exitCode=0
Dec 05 09:28:37 crc kubenswrapper[4780]: I1205 09:28:37.141205 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerDied","Data":"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f"}
Dec 05 09:28:38 crc kubenswrapper[4780]: I1205 09:28:38.166904 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerStarted","Data":"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96"}
Dec 05 09:28:38 crc kubenswrapper[4780]: I1205 09:28:38.194325 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9d9gh" podStartSLOduration=3.771533344 podStartE2EDuration="6.194298001s" podCreationTimestamp="2025-12-05 09:28:32 +0000 UTC" firstStartedPulling="2025-12-05 09:28:35.115801233 +0000 UTC m=+9749.185317565" lastFinishedPulling="2025-12-05 09:28:37.53856589 +0000 UTC m=+9751.608082222" observedRunningTime="2025-12-05 09:28:38.186898282 +0000 UTC m=+9752.256414634" watchObservedRunningTime="2025-12-05 09:28:38.194298001 +0000 UTC m=+9752.263814333"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.163161 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"]
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.166747 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.180387 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"]
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.260569 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xcz\" (UniqueName: \"kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.260841 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.260937 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.363188 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xcz\" (UniqueName: \"kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.363305 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.363331 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.363990 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.364042 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.387640 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xcz\" (UniqueName: \"kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz\") pod \"certified-operators-9dbfb\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:40 crc kubenswrapper[4780]: I1205 09:28:40.487213 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dbfb"
Dec 05 09:28:41 crc kubenswrapper[4780]: I1205 09:28:41.054427 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"]
Dec 05 09:28:41 crc kubenswrapper[4780]: I1205 09:28:41.198764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerStarted","Data":"d19532013d97ae3d46f20cb38632b4b9a2394c7c79c8d7baffadced4dbd73c17"}
Dec 05 09:28:42 crc kubenswrapper[4780]: I1205 09:28:42.212637 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a07333-1514-4825-b844-253c57d70297" containerID="35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d" exitCode=0
Dec 05 09:28:42 crc kubenswrapper[4780]: I1205 09:28:42.212741 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerDied","Data":"35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d"}
Dec 05 09:28:43 crc kubenswrapper[4780]: I1205 09:28:43.119841 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:43 crc kubenswrapper[4780]: I1205 09:28:43.120212 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:43 crc kubenswrapper[4780]: I1205 09:28:43.174864 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9d9gh"
Dec 05 09:28:43 crc kubenswrapper[4780]: I1205 09:28:43.224460 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerStarted","Data":"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e"}
Dec 05 09:28:43
crc kubenswrapper[4780]: I1205 09:28:43.279448 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9d9gh" Dec 05 09:28:45 crc kubenswrapper[4780]: I1205 09:28:45.244125 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a07333-1514-4825-b844-253c57d70297" containerID="d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e" exitCode=0 Dec 05 09:28:45 crc kubenswrapper[4780]: I1205 09:28:45.244200 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerDied","Data":"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e"} Dec 05 09:28:45 crc kubenswrapper[4780]: I1205 09:28:45.351349 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"] Dec 05 09:28:45 crc kubenswrapper[4780]: I1205 09:28:45.351621 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9d9gh" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="registry-server" containerID="cri-o://c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96" gracePeriod=2 Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.249068 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d9gh" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.255179 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerStarted","Data":"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b"} Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.258127 4780 generic.go:334] "Generic (PLEG): container finished" podID="0316d700-a429-4066-b6d7-209efbf928c4" containerID="c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96" exitCode=0 Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.258158 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerDied","Data":"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96"} Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.258196 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9d9gh" event={"ID":"0316d700-a429-4066-b6d7-209efbf928c4","Type":"ContainerDied","Data":"01af67890b11afc17bd68e50746b7bb10cd8a65a999fc7653f240f3800bedecf"} Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.258214 4780 scope.go:117] "RemoveContainer" containerID="c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.258213 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9d9gh" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.280951 4780 scope.go:117] "RemoveContainer" containerID="07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.293396 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dbfb" podStartSLOduration=2.797954588 podStartE2EDuration="6.293253212s" podCreationTimestamp="2025-12-05 09:28:40 +0000 UTC" firstStartedPulling="2025-12-05 09:28:42.215190852 +0000 UTC m=+9756.284707184" lastFinishedPulling="2025-12-05 09:28:45.710489476 +0000 UTC m=+9759.780005808" observedRunningTime="2025-12-05 09:28:46.289552363 +0000 UTC m=+9760.359068695" watchObservedRunningTime="2025-12-05 09:28:46.293253212 +0000 UTC m=+9760.362769544" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.325842 4780 scope.go:117] "RemoveContainer" containerID="f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.368289 4780 scope.go:117] "RemoveContainer" containerID="c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96" Dec 05 09:28:46 crc kubenswrapper[4780]: E1205 09:28:46.369645 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96\": container with ID starting with c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96 not found: ID does not exist" containerID="c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.369696 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96"} err="failed to get container status \"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96\": rpc error: code = NotFound desc = could not find container \"c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96\": container with ID starting with c5e658b453819c78f63412406bfe3b76e73021b2a71770cecd418b2f841c3b96 not found: ID does not exist" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.369727 4780 scope.go:117] "RemoveContainer" containerID="07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f" Dec 05 09:28:46 crc kubenswrapper[4780]: E1205 09:28:46.370648 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f\": container with ID starting with 07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f not found: ID does not exist" containerID="07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.370699 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f"} err="failed to get container status \"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f\": rpc error: code = NotFound desc = could not find container \"07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f\": container with ID starting with 07ec13b866767abb2ec248da1a398bd95f1e298a6ecbfcb7f9240b01e3740a9f not found: ID does not exist" Dec 05 
09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.370734 4780 scope.go:117] "RemoveContainer" containerID="f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b" Dec 05 09:28:46 crc kubenswrapper[4780]: E1205 09:28:46.371062 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b\": container with ID starting with f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b not found: ID does not exist" containerID="f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.371096 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b"} err="failed to get container status \"f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b\": rpc error: code = NotFound desc = could not find container \"f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b\": container with ID starting with f0772f4b07e60eea74e960608cd139b2a8819e773c2e90ca22873831e25bb05b not found: ID does not exist" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.393963 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content\") pod \"0316d700-a429-4066-b6d7-209efbf928c4\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.394037 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kct4c\" (UniqueName: \"kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c\") pod \"0316d700-a429-4066-b6d7-209efbf928c4\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.394139 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities\") pod \"0316d700-a429-4066-b6d7-209efbf928c4\" (UID: \"0316d700-a429-4066-b6d7-209efbf928c4\") " Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.396221 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities" (OuterVolumeSpecName: "utilities") pod "0316d700-a429-4066-b6d7-209efbf928c4" (UID: "0316d700-a429-4066-b6d7-209efbf928c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.404117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c" (OuterVolumeSpecName: "kube-api-access-kct4c") pod "0316d700-a429-4066-b6d7-209efbf928c4" (UID: "0316d700-a429-4066-b6d7-209efbf928c4"). InnerVolumeSpecName "kube-api-access-kct4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.416950 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0316d700-a429-4066-b6d7-209efbf928c4" (UID: "0316d700-a429-4066-b6d7-209efbf928c4"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.496485 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.497114 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kct4c\" (UniqueName: \"kubernetes.io/projected/0316d700-a429-4066-b6d7-209efbf928c4-kube-api-access-kct4c\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.497211 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0316d700-a429-4066-b6d7-209efbf928c4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.605364 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"] Dec 05 09:28:46 crc kubenswrapper[4780]: I1205 09:28:46.621497 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9d9gh"] Dec 05 09:28:47 crc kubenswrapper[4780]: I1205 09:28:47.138324 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:28:47 crc kubenswrapper[4780]: E1205 09:28:47.138650 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:28:48 crc kubenswrapper[4780]: I1205 09:28:48.150781 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0316d700-a429-4066-b6d7-209efbf928c4" path="/var/lib/kubelet/pods/0316d700-a429-4066-b6d7-209efbf928c4/volumes" Dec 05 09:28:50 crc kubenswrapper[4780]: I1205 09:28:50.488288 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:50 crc kubenswrapper[4780]: I1205 09:28:50.488649 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:50 crc kubenswrapper[4780]: I1205 09:28:50.541100 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:51 crc kubenswrapper[4780]: I1205 09:28:51.361094 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:51 crc kubenswrapper[4780]: I1205 09:28:51.557103 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"] Dec 05 09:28:53 crc kubenswrapper[4780]: I1205 09:28:53.326805 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dbfb" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="registry-server" containerID="cri-o://78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b" gracePeriod=2 Dec 05 09:28:53 crc kubenswrapper[4780]: I1205 09:28:53.920873 4780 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.084295 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87xcz\" (UniqueName: \"kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz\") pod \"e5a07333-1514-4825-b844-253c57d70297\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.084333 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content\") pod \"e5a07333-1514-4825-b844-253c57d70297\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.084380 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities\") pod \"e5a07333-1514-4825-b844-253c57d70297\" (UID: \"e5a07333-1514-4825-b844-253c57d70297\") " Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.089030 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities" (OuterVolumeSpecName: "utilities") pod "e5a07333-1514-4825-b844-253c57d70297" (UID: "e5a07333-1514-4825-b844-253c57d70297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.108118 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz" (OuterVolumeSpecName: "kube-api-access-87xcz") pod "e5a07333-1514-4825-b844-253c57d70297" (UID: "e5a07333-1514-4825-b844-253c57d70297"). InnerVolumeSpecName "kube-api-access-87xcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.146556 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5a07333-1514-4825-b844-253c57d70297" (UID: "e5a07333-1514-4825-b844-253c57d70297"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.187373 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.187602 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a07333-1514-4825-b844-253c57d70297-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.187798 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87xcz\" (UniqueName: \"kubernetes.io/projected/e5a07333-1514-4825-b844-253c57d70297-kube-api-access-87xcz\") on node \"crc\" DevicePath \"\"" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.338727 4780 generic.go:334] "Generic (PLEG): container finished" podID="e5a07333-1514-4825-b844-253c57d70297" containerID="78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b" exitCode=0 Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.338768 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerDied","Data":"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b"} Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.339924 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dbfb" event={"ID":"e5a07333-1514-4825-b844-253c57d70297","Type":"ContainerDied","Data":"d19532013d97ae3d46f20cb38632b4b9a2394c7c79c8d7baffadced4dbd73c17"} Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.338819 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dbfb" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.339991 4780 scope.go:117] "RemoveContainer" containerID="78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.368012 4780 scope.go:117] "RemoveContainer" containerID="d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.372065 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"] Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.387363 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dbfb"] Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.403132 4780 scope.go:117] "RemoveContainer" containerID="35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.452997 4780 scope.go:117] "RemoveContainer" containerID="78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b" Dec 05 09:28:54 crc kubenswrapper[4780]: E1205 09:28:54.456720 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b\": container with ID starting with 78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b not found: ID does not exist" containerID="78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.456770 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b"} err="failed to get container status \"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b\": rpc error: code = NotFound desc = could not find container \"78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b\": container with ID starting with 78ea78cf1b42769947e7c6cc567ce2bc3613c15aad412716202b3807be77d52b not found: ID does not exist" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.456804 4780 scope.go:117] "RemoveContainer" containerID="d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e" Dec 05 09:28:54 crc kubenswrapper[4780]: E1205 09:28:54.457067 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e\": container with ID starting with d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e not found: ID does not exist" containerID="d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.457089 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e"} err="failed to get container status \"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e\": rpc error: code = NotFound desc = could not find container \"d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e\": container with ID starting with d6a52207a53e31dae3b91b99e9fcc249fd245350cfd741b0e930a38f4d39d98e not found: ID does not exist" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.457105 4780 scope.go:117] "RemoveContainer" 
containerID="35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d" Dec 05 09:28:54 crc kubenswrapper[4780]: E1205 09:28:54.457296 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d\": container with ID starting with 35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d not found: ID does not exist" containerID="35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d" Dec 05 09:28:54 crc kubenswrapper[4780]: I1205 09:28:54.457319 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d"} err="failed to get container status \"35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d\": rpc error: code = NotFound desc = could not find container \"35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d\": container with ID starting with 35f7751574741355d47df8cc2dad594779e1a7a948c7cb11308a13a035298b3d not found: ID does not exist" Dec 05 09:28:56 crc kubenswrapper[4780]: I1205 09:28:56.150302 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a07333-1514-4825-b844-253c57d70297" path="/var/lib/kubelet/pods/e5a07333-1514-4825-b844-253c57d70297/volumes" Dec 05 09:28:59 crc kubenswrapper[4780]: I1205 09:28:59.138828 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:28:59 crc kubenswrapper[4780]: E1205 09:28:59.139668 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:29:11 crc kubenswrapper[4780]: I1205 09:29:11.139037 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:29:11 crc kubenswrapper[4780]: E1205 09:29:11.139821 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:29:24 crc kubenswrapper[4780]: I1205 09:29:24.138498 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:29:24 crc kubenswrapper[4780]: E1205 09:29:24.139468 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:29:39 crc kubenswrapper[4780]: I1205 09:29:39.139329 4780 scope.go:117] "RemoveContainer" 
containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:29:39 crc kubenswrapper[4780]: E1205 09:29:39.140411 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.812319 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mrzp"] Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813068 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="extract-content" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813083 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="extract-content" Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813099 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="extract-content" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813106 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="extract-content" Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813116 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="registry-server" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813133 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="registry-server" Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813151 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="registry-server" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813157 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="registry-server" Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813168 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="extract-utilities" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813174 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="extract-utilities" Dec 05 09:29:40 crc kubenswrapper[4780]: E1205 09:29:40.813192 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="extract-utilities" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813198 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="extract-utilities" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813413 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="0316d700-a429-4066-b6d7-209efbf928c4" containerName="registry-server" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.813434 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a07333-1514-4825-b844-253c57d70297" containerName="registry-server" Dec 05 09:29:40 
crc kubenswrapper[4780]: I1205 09:29:40.814902 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.819432 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsms\" (UniqueName: \"kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.819480 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.819555 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.826091 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mrzp"] Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.921677 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsms\" (UniqueName: \"kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.921749 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.921833 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.922269 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.922312 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp" 
Dec 05 09:29:40 crc kubenswrapper[4780]: I1205 09:29:40.942613 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsms\" (UniqueName: \"kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms\") pod \"community-operators-4mrzp\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") " pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:41 crc kubenswrapper[4780]: I1205 09:29:41.146410 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:41 crc kubenswrapper[4780]: I1205 09:29:41.655323 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mrzp"]
Dec 05 09:29:41 crc kubenswrapper[4780]: I1205 09:29:41.790090 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerStarted","Data":"8b647fd021438d5853c85a2b996a3ed0531c53a8ea1cb4fae587eab1eecd8b2e"}
Dec 05 09:29:42 crc kubenswrapper[4780]: I1205 09:29:42.803776 4780 generic.go:334] "Generic (PLEG): container finished" podID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerID="fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb" exitCode=0
Dec 05 09:29:42 crc kubenswrapper[4780]: I1205 09:29:42.804086 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerDied","Data":"fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb"}
Dec 05 09:29:43 crc kubenswrapper[4780]: I1205 09:29:43.815261 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerStarted","Data":"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"}
Dec 05 09:29:44 crc kubenswrapper[4780]: I1205 09:29:44.834342 4780 generic.go:334] "Generic (PLEG): container finished" podID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerID="0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1" exitCode=0
Dec 05 09:29:44 crc kubenswrapper[4780]: I1205 09:29:44.834406 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerDied","Data":"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"}
Dec 05 09:29:45 crc kubenswrapper[4780]: I1205 09:29:45.847401 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerStarted","Data":"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"}
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.146708 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.147248 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.199095 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.226144 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mrzp" podStartSLOduration=8.827186068 podStartE2EDuration="11.226125074s" podCreationTimestamp="2025-12-05 09:29:40 +0000 UTC" firstStartedPulling="2025-12-05 09:29:42.876004267 +0000 UTC m=+9816.945520599" lastFinishedPulling="2025-12-05 09:29:45.274943273 +0000 UTC m=+9819.344459605" observedRunningTime="2025-12-05 09:29:45.875652602 +0000 UTC m=+9819.945168944" watchObservedRunningTime="2025-12-05 09:29:51.226125074 +0000 UTC m=+9825.295641406"
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.947228 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:51 crc kubenswrapper[4780]: I1205 09:29:51.998530 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mrzp"]
Dec 05 09:29:52 crc kubenswrapper[4780]: I1205 09:29:52.140017 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733"
Dec 05 09:29:52 crc kubenswrapper[4780]: E1205 09:29:52.140311 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad"
Dec 05 09:29:53 crc kubenswrapper[4780]: I1205 09:29:53.926176 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mrzp" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="registry-server" containerID="cri-o://c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56" gracePeriod=2
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.591155 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.711322 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbsms\" (UniqueName: \"kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms\") pod \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") "
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.711620 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content\") pod \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") "
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.711975 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities\") pod \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\" (UID: \"c53ef719-1c63-422c-a57b-23fb3ff8caa6\") "
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.712969 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities" (OuterVolumeSpecName: "utilities") pod "c53ef719-1c63-422c-a57b-23fb3ff8caa6" (UID: "c53ef719-1c63-422c-a57b-23fb3ff8caa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.718453 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms" (OuterVolumeSpecName: "kube-api-access-nbsms") pod "c53ef719-1c63-422c-a57b-23fb3ff8caa6" (UID: "c53ef719-1c63-422c-a57b-23fb3ff8caa6"). InnerVolumeSpecName "kube-api-access-nbsms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.768615 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c53ef719-1c63-422c-a57b-23fb3ff8caa6" (UID: "c53ef719-1c63-422c-a57b-23fb3ff8caa6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.814527 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbsms\" (UniqueName: \"kubernetes.io/projected/c53ef719-1c63-422c-a57b-23fb3ff8caa6-kube-api-access-nbsms\") on node \"crc\" DevicePath \"\""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.814567 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.814576 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c53ef719-1c63-422c-a57b-23fb3ff8caa6-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.946416 4780 generic.go:334] "Generic (PLEG): container finished" podID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerID="c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56" exitCode=0
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.946513 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mrzp"
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.946524 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerDied","Data":"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"}
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.946930 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mrzp" event={"ID":"c53ef719-1c63-422c-a57b-23fb3ff8caa6","Type":"ContainerDied","Data":"8b647fd021438d5853c85a2b996a3ed0531c53a8ea1cb4fae587eab1eecd8b2e"}
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.946965 4780 scope.go:117] "RemoveContainer" containerID="c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.981982 4780 scope.go:117] "RemoveContainer" containerID="0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.985800 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mrzp"]
Dec 05 09:29:54 crc kubenswrapper[4780]: I1205 09:29:54.995476 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mrzp"]
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.007969 4780 scope.go:117] "RemoveContainer" containerID="fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.054394 4780 scope.go:117] "RemoveContainer" containerID="c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"
Dec 05 09:29:55 crc kubenswrapper[4780]: E1205 09:29:55.054869 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56\": container with ID starting with c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56 not found: ID does not exist" containerID="c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.054915 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56"} err="failed to get container status \"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56\": rpc error: code = NotFound desc = could not find container \"c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56\": container with ID starting with c2bd79d09c1202ca57d3ab8edb71b7456314cc9ba77bbd0d828071d7ddfc5d56 not found: ID does not exist"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.054937 4780 scope.go:117] "RemoveContainer" containerID="0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"
Dec 05 09:29:55 crc kubenswrapper[4780]: E1205 09:29:55.055459 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1\": container with ID starting with 0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1 not found: ID does not exist" containerID="0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.055501 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1"} err="failed to get container status \"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1\": rpc error: code = NotFound desc = could not find container \"0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1\": container with ID starting with 0b30f9bf4be63ae7b95fdf0720cba15150434913936b1239356d275e94f150c1 not found: ID does not exist"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.055530 4780 scope.go:117] "RemoveContainer" containerID="fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb"
Dec 05 09:29:55 crc kubenswrapper[4780]: E1205 09:29:55.055815 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb\": container with ID starting with fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb not found: ID does not exist" containerID="fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb"
Dec 05 09:29:55 crc kubenswrapper[4780]: I1205 09:29:55.055840 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb"} err="failed to get container status \"fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb\": rpc error: code = NotFound desc = could not find container \"fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb\": container with ID starting with fc41ce4c66e3cd1b3378b4cc9dc8aa8cbf4f9da1b2aabf4d1309bafe250876fb not found: ID does not exist"
Dec 05 09:29:56 crc kubenswrapper[4780]: I1205 09:29:56.156079 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" path="/var/lib/kubelet/pods/c53ef719-1c63-422c-a57b-23fb3ff8caa6/volumes"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.168084 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"]
Dec 05 09:30:00 crc kubenswrapper[4780]: E1205 09:30:00.169084 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="extract-utilities"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.169100 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="extract-utilities"
Dec 05 09:30:00 crc kubenswrapper[4780]: E1205 09:30:00.169124 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="registry-server"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.169130 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="registry-server"
Dec 05 09:30:00 crc kubenswrapper[4780]: E1205 09:30:00.169152 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="extract-content"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.169158 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="extract-content"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.169401 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53ef719-1c63-422c-a57b-23fb3ff8caa6" containerName="registry-server"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.170195 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.176384 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.176520 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.180420 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"]
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.324749 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.325161 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cshk\" (UniqueName: \"kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.325336 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.426903 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cshk\" (UniqueName: \"kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.426998 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.427056 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.428067 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.565103 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.572816 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cshk\" (UniqueName: \"kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk\") pod \"collect-profiles-29415450-5xhhd\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:00 crc kubenswrapper[4780]: I1205 09:30:00.797548 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:01 crc kubenswrapper[4780]: I1205 09:30:01.339781 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"]
Dec 05 09:30:02 crc kubenswrapper[4780]: I1205 09:30:02.033133 4780 generic.go:334] "Generic (PLEG): container finished" podID="37bcd0d5-851f-424b-b305-2761e8916835" containerID="ee600a8dabe96ab122ded07b9d1bb374eef4433256bca653e467596e9d64910c" exitCode=0
Dec 05 09:30:02 crc kubenswrapper[4780]: I1205 09:30:02.033477 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd" event={"ID":"37bcd0d5-851f-424b-b305-2761e8916835","Type":"ContainerDied","Data":"ee600a8dabe96ab122ded07b9d1bb374eef4433256bca653e467596e9d64910c"}
Dec 05 09:30:02 crc kubenswrapper[4780]: I1205 09:30:02.033504 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd" event={"ID":"37bcd0d5-851f-424b-b305-2761e8916835","Type":"ContainerStarted","Data":"e349baf97f1cb4d6a46198371a31d4437481f824117fbd56273f5c0c90dbdac3"}
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.548700 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd"
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.602170 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume\") pod \"37bcd0d5-851f-424b-b305-2761e8916835\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") "
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.602224 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cshk\" (UniqueName: \"kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk\") pod \"37bcd0d5-851f-424b-b305-2761e8916835\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") "
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.602594 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume\") pod \"37bcd0d5-851f-424b-b305-2761e8916835\" (UID: \"37bcd0d5-851f-424b-b305-2761e8916835\") "
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.603569 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume" (OuterVolumeSpecName: "config-volume") pod "37bcd0d5-851f-424b-b305-2761e8916835" (UID: "37bcd0d5-851f-424b-b305-2761e8916835"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.611064 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37bcd0d5-851f-424b-b305-2761e8916835" (UID: "37bcd0d5-851f-424b-b305-2761e8916835"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.622944 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk" (OuterVolumeSpecName: "kube-api-access-9cshk") pod "37bcd0d5-851f-424b-b305-2761e8916835" (UID: "37bcd0d5-851f-424b-b305-2761e8916835"). InnerVolumeSpecName "kube-api-access-9cshk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.704816 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37bcd0d5-851f-424b-b305-2761e8916835-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.704857 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cshk\" (UniqueName: \"kubernetes.io/projected/37bcd0d5-851f-424b-b305-2761e8916835-kube-api-access-9cshk\") on node \"crc\" DevicePath \"\""
Dec 05 09:30:03 crc kubenswrapper[4780]: I1205 09:30:03.704869 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37bcd0d5-851f-424b-b305-2761e8916835-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.055213 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd" event={"ID":"37bcd0d5-851f-424b-b305-2761e8916835","Type":"ContainerDied","Data":"e349baf97f1cb4d6a46198371a31d4437481f824117fbd56273f5c0c90dbdac3"}
Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.055558 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e349baf97f1cb4d6a46198371a31d4437481f824117fbd56273f5c0c90dbdac3"
Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.055273 4780 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415450-5xhhd" Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.139809 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:30:04 crc kubenswrapper[4780]: E1205 09:30:04.140244 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.735954 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg"] Dec 05 09:30:04 crc kubenswrapper[4780]: I1205 09:30:04.746523 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415405-s6tgg"] Dec 05 09:30:06 crc kubenswrapper[4780]: I1205 09:30:06.166514 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c53a41-5485-4ba5-b4d8-c612ba293495" path="/var/lib/kubelet/pods/b9c53a41-5485-4ba5-b4d8-c612ba293495/volumes" Dec 05 09:30:19 crc kubenswrapper[4780]: I1205 09:30:19.139680 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:30:19 crc kubenswrapper[4780]: E1205 09:30:19.140386 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:30:33 crc kubenswrapper[4780]: I1205 09:30:33.140272 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:30:33 crc kubenswrapper[4780]: E1205 09:30:33.140965 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:30:36 crc kubenswrapper[4780]: I1205 09:30:36.647852 4780 scope.go:117] "RemoveContainer" containerID="954fa4708dd177b493a052477a5ce9bb9cccd33e76b304be847f66977d8c73ea" Dec 05 09:30:46 crc kubenswrapper[4780]: I1205 09:30:46.147269 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:30:46 crc kubenswrapper[4780]: E1205 09:30:46.150051 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:30:58 crc kubenswrapper[4780]: I1205 09:30:58.139350 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:30:58 crc kubenswrapper[4780]: E1205 09:30:58.140214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:31:13 crc kubenswrapper[4780]: I1205 09:31:13.140189 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:31:13 crc kubenswrapper[4780]: I1205 09:31:13.727568 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1"} Dec 05 09:33:20 crc kubenswrapper[4780]: I1205 09:33:20.994505 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:20 crc kubenswrapper[4780]: E1205 09:33:20.995638 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bcd0d5-851f-424b-b305-2761e8916835" containerName="collect-profiles" Dec 05 09:33:20 crc kubenswrapper[4780]: I1205 09:33:20.995674 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bcd0d5-851f-424b-b305-2761e8916835" containerName="collect-profiles" Dec 05 09:33:20 crc kubenswrapper[4780]: I1205 09:33:20.996005 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bcd0d5-851f-424b-b305-2761e8916835" containerName="collect-profiles" Dec 05 09:33:20 crc kubenswrapper[4780]: I1205 09:33:20.997817 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.012009 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.106602 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.106659 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8s9\" (UniqueName: \"kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.106719 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.208996 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8s9\" (UniqueName: \"kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.209405 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.209710 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.209975 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.210241 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.231977 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xb8s9\" (UniqueName: \"kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9\") pod \"redhat-operators-jl769\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.319917 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:21 crc kubenswrapper[4780]: I1205 09:33:21.811477 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:21 crc kubenswrapper[4780]: W1205 09:33:21.830429 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e5db5a_70af_4022_94a9_828b18215a1f.slice/crio-6214ca530d97cd4166d5b9084191ee9a46df2d12a12cd5038428d389d5f43241 WatchSource:0}: Error finding container 6214ca530d97cd4166d5b9084191ee9a46df2d12a12cd5038428d389d5f43241: Status 404 returned error can't find the container with id 6214ca530d97cd4166d5b9084191ee9a46df2d12a12cd5038428d389d5f43241 Dec 05 09:33:22 crc kubenswrapper[4780]: I1205 09:33:22.095359 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerID="860d1f390c163a9ea00d78f6f0fc1b47cc457d96acb86cebd3c8fc23c6227e84" exitCode=0 Dec 05 09:33:22 crc kubenswrapper[4780]: I1205 09:33:22.095428 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerDied","Data":"860d1f390c163a9ea00d78f6f0fc1b47cc457d96acb86cebd3c8fc23c6227e84"} Dec 05 09:33:22 crc kubenswrapper[4780]: I1205 09:33:22.095456 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerStarted","Data":"6214ca530d97cd4166d5b9084191ee9a46df2d12a12cd5038428d389d5f43241"} Dec 05 09:33:23 crc kubenswrapper[4780]: I1205 09:33:23.107058 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerStarted","Data":"6910a166e89124a8642119f189df7631bee040903e5f7b482176b72d2fc1cf6a"} Dec 05 09:33:24 crc kubenswrapper[4780]: E1205 09:33:24.781622 4780 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e5db5a_70af_4022_94a9_828b18215a1f.slice/crio-conmon-6910a166e89124a8642119f189df7631bee040903e5f7b482176b72d2fc1cf6a.scope\": RecentStats: unable to find data in memory cache]" Dec 05 09:33:25 crc kubenswrapper[4780]: I1205 09:33:25.126057 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerID="6910a166e89124a8642119f189df7631bee040903e5f7b482176b72d2fc1cf6a" exitCode=0 Dec 05 09:33:25 crc kubenswrapper[4780]: I1205 09:33:25.126110 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerDied","Data":"6910a166e89124a8642119f189df7631bee040903e5f7b482176b72d2fc1cf6a"} Dec 05 09:33:26 crc kubenswrapper[4780]: I1205 09:33:26.152970 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" 
event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerStarted","Data":"200684fea3d3e757816dac0d55a4a6173658b3c959177cbee710a1dec8433a5b"} Dec 05 09:33:26 crc kubenswrapper[4780]: I1205 09:33:26.171445 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jl769" podStartSLOduration=2.743273417 podStartE2EDuration="6.171426523s" podCreationTimestamp="2025-12-05 09:33:20 +0000 UTC" firstStartedPulling="2025-12-05 09:33:22.096818354 +0000 UTC m=+10036.166334686" lastFinishedPulling="2025-12-05 09:33:25.52497146 +0000 UTC m=+10039.594487792" observedRunningTime="2025-12-05 09:33:26.170699002 +0000 UTC m=+10040.240215344" watchObservedRunningTime="2025-12-05 09:33:26.171426523 +0000 UTC m=+10040.240942855" Dec 05 09:33:29 crc kubenswrapper[4780]: I1205 09:33:29.908386 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:33:29 crc kubenswrapper[4780]: I1205 09:33:29.908788 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:33:31 crc kubenswrapper[4780]: I1205 09:33:31.321094 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:31 crc kubenswrapper[4780]: I1205 09:33:31.321416 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:31 crc kubenswrapper[4780]: I1205 09:33:31.374815 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:32 crc kubenswrapper[4780]: I1205 09:33:32.248739 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:32 crc kubenswrapper[4780]: I1205 09:33:32.295460 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:34 crc kubenswrapper[4780]: I1205 09:33:34.213014 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jl769" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="registry-server" containerID="cri-o://200684fea3d3e757816dac0d55a4a6173658b3c959177cbee710a1dec8433a5b" gracePeriod=2 Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.228452 4780 generic.go:334] "Generic (PLEG): container finished" podID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerID="200684fea3d3e757816dac0d55a4a6173658b3c959177cbee710a1dec8433a5b" exitCode=0 Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.228515 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerDied","Data":"200684fea3d3e757816dac0d55a4a6173658b3c959177cbee710a1dec8433a5b"} Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.345754 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.425827 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities\") pod \"f4e5db5a-70af-4022-94a9-828b18215a1f\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.425898 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content\") pod \"f4e5db5a-70af-4022-94a9-828b18215a1f\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.425950 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb8s9\" (UniqueName: \"kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9\") pod \"f4e5db5a-70af-4022-94a9-828b18215a1f\" (UID: \"f4e5db5a-70af-4022-94a9-828b18215a1f\") " Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.426911 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities" (OuterVolumeSpecName: "utilities") pod "f4e5db5a-70af-4022-94a9-828b18215a1f" (UID: "f4e5db5a-70af-4022-94a9-828b18215a1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.441785 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9" (OuterVolumeSpecName: "kube-api-access-xb8s9") pod "f4e5db5a-70af-4022-94a9-828b18215a1f" (UID: "f4e5db5a-70af-4022-94a9-828b18215a1f"). InnerVolumeSpecName "kube-api-access-xb8s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.528827 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.528859 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb8s9\" (UniqueName: \"kubernetes.io/projected/f4e5db5a-70af-4022-94a9-828b18215a1f-kube-api-access-xb8s9\") on node \"crc\" DevicePath \"\"" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.542706 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4e5db5a-70af-4022-94a9-828b18215a1f" (UID: "f4e5db5a-70af-4022-94a9-828b18215a1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:33:35 crc kubenswrapper[4780]: I1205 09:33:35.631051 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4e5db5a-70af-4022-94a9-828b18215a1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.239837 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jl769" event={"ID":"f4e5db5a-70af-4022-94a9-828b18215a1f","Type":"ContainerDied","Data":"6214ca530d97cd4166d5b9084191ee9a46df2d12a12cd5038428d389d5f43241"} Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.240199 4780 scope.go:117] "RemoveContainer" containerID="200684fea3d3e757816dac0d55a4a6173658b3c959177cbee710a1dec8433a5b" Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.240109 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jl769" Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.265284 4780 scope.go:117] "RemoveContainer" containerID="6910a166e89124a8642119f189df7631bee040903e5f7b482176b72d2fc1cf6a" Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.269813 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.287411 4780 scope.go:117] "RemoveContainer" containerID="860d1f390c163a9ea00d78f6f0fc1b47cc457d96acb86cebd3c8fc23c6227e84" Dec 05 09:33:36 crc kubenswrapper[4780]: I1205 09:33:36.288542 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jl769"] Dec 05 09:33:38 crc kubenswrapper[4780]: I1205 09:33:38.149542 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" path="/var/lib/kubelet/pods/f4e5db5a-70af-4022-94a9-828b18215a1f/volumes" Dec 05 09:33:59 crc kubenswrapper[4780]: I1205 09:33:59.908040 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:33:59 crc kubenswrapper[4780]: I1205 09:33:59.908662 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:34:10 crc kubenswrapper[4780]: I1205 09:34:10.567971 4780 generic.go:334] "Generic (PLEG): container finished" podID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" containerID="b96cfe0d1ed92414ac4b5c1c1bfe6e634898b9b368406fee4f79b0ba2a196a83" exitCode=0 Dec 05 09:34:10 crc kubenswrapper[4780]: I1205 09:34:10.568062 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5d2ce5fa-2138-48bd-9af7-76d136e21dfe","Type":"ContainerDied","Data":"b96cfe0d1ed92414ac4b5c1c1bfe6e634898b9b368406fee4f79b0ba2a196a83"} Dec 05 09:34:11 crc kubenswrapper[4780]: I1205 09:34:11.982774 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.083745 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084082 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084152 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnmzn\" (UniqueName: \"kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084193 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084229 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084247 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084267 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084329 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084371 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret\") pod \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\" (UID: \"5d2ce5fa-2138-48bd-9af7-76d136e21dfe\") " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.084988 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.085098 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data" (OuterVolumeSpecName: "config-data") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.085319 4780 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.085341 4780 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.089145 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.090838 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn" (OuterVolumeSpecName: "kube-api-access-xnmzn") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "kube-api-access-xnmzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.090997 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.113332 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.114021 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.115460 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.136485 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5d2ce5fa-2138-48bd-9af7-76d136e21dfe" (UID: "5d2ce5fa-2138-48bd-9af7-76d136e21dfe"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189756 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189808 4780 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189822 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnmzn\" (UniqueName: \"kubernetes.io/projected/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-kube-api-access-xnmzn\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189832 4780 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189842 4780 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189960 4780 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.189973 4780 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5d2ce5fa-2138-48bd-9af7-76d136e21dfe-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.212604 4780 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.291707 4780 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.585918 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"5d2ce5fa-2138-48bd-9af7-76d136e21dfe","Type":"ContainerDied","Data":"48f629d7951386eea32ea2404e3b6ba624f5954571799b81e79177cec6fe108d"} Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.585959 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f629d7951386eea32ea2404e3b6ba624f5954571799b81e79177cec6fe108d" Dec 05 09:34:12 crc kubenswrapper[4780]: I1205 09:34:12.586003 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.856812 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 09:34:21 crc kubenswrapper[4780]: E1205 09:34:21.857841 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" containerName="tempest-tests-tempest-tests-runner" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.857860 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" containerName="tempest-tests-tempest-tests-runner" Dec 05 09:34:21 crc kubenswrapper[4780]: E1205 09:34:21.857873 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="extract-utilities" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.857907 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="extract-utilities" Dec 05 09:34:21 crc kubenswrapper[4780]: E1205 09:34:21.857919 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="extract-content" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.857926 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="extract-content" Dec 05 09:34:21 crc kubenswrapper[4780]: E1205 09:34:21.857940 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="registry-server" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.857945 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="registry-server" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.858165 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e5db5a-70af-4022-94a9-828b18215a1f" containerName="registry-server" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.858182 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2ce5fa-2138-48bd-9af7-76d136e21dfe" containerName="tempest-tests-tempest-tests-runner" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.858964 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.861700 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j5kk9" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.867164 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.983480 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw9j\" (UniqueName: \"kubernetes.io/projected/8b3db936-5d89-4bde-8f47-5740a6bb4b93-kube-api-access-sfw9j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:21 crc kubenswrapper[4780]: I1205 09:34:21.983592 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.085611 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfw9j\" (UniqueName: \"kubernetes.io/projected/8b3db936-5d89-4bde-8f47-5740a6bb4b93-kube-api-access-sfw9j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.085728 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.086240 4780 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.103715 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfw9j\" (UniqueName: \"kubernetes.io/projected/8b3db936-5d89-4bde-8f47-5740a6bb4b93-kube-api-access-sfw9j\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.112410 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8b3db936-5d89-4bde-8f47-5740a6bb4b93\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc 
kubenswrapper[4780]: I1205 09:34:22.182373 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.666022 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.670802 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:34:22 crc kubenswrapper[4780]: I1205 09:34:22.679252 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8b3db936-5d89-4bde-8f47-5740a6bb4b93","Type":"ContainerStarted","Data":"5b32061eb95379c887a0e1a2217a73815cb88fbca71d7179b47e92f98c4c2a2c"} Dec 05 09:34:23 crc kubenswrapper[4780]: I1205 09:34:23.690993 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8b3db936-5d89-4bde-8f47-5740a6bb4b93","Type":"ContainerStarted","Data":"38def102bc7fe582ca13118a288cd6f53e0883f14cae54935d029fc08116dff3"} Dec 05 09:34:29 crc kubenswrapper[4780]: I1205 09:34:29.907675 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:34:29 crc kubenswrapper[4780]: I1205 09:34:29.908249 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:34:29 crc kubenswrapper[4780]: I1205 09:34:29.908286 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:34:29 crc kubenswrapper[4780]: I1205 09:34:29.909071 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:34:29 crc kubenswrapper[4780]: I1205 09:34:29.909127 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1" gracePeriod=600 Dec 05 09:34:30 crc kubenswrapper[4780]: I1205 09:34:30.760197 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1" exitCode=0 Dec 05 09:34:30 crc kubenswrapper[4780]: I1205 09:34:30.760281 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" 
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1"} Dec 05 09:34:30 crc kubenswrapper[4780]: I1205 09:34:30.760574 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242"} Dec 05 09:34:30 crc kubenswrapper[4780]: I1205 09:34:30.760597 4780 scope.go:117] "RemoveContainer" containerID="8023847dfe0e5beecd02ae459bd6b26de455881393e777f6566872790450f733" Dec 05 09:34:30 crc kubenswrapper[4780]: I1205 09:34:30.779958 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=9.015820446 podStartE2EDuration="9.779941448s" podCreationTimestamp="2025-12-05 09:34:21 +0000 UTC" firstStartedPulling="2025-12-05 09:34:22.670619427 +0000 UTC m=+10096.740135759" lastFinishedPulling="2025-12-05 09:34:23.434740429 +0000 UTC m=+10097.504256761" observedRunningTime="2025-12-05 09:34:23.706976581 +0000 UTC m=+10097.776492943" watchObservedRunningTime="2025-12-05 09:34:30.779941448 +0000 UTC m=+10104.849457780" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.750961 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-spv9c/must-gather-t8z9v"] Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.759080 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.769627 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-spv9c"/"kube-root-ca.crt" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.770110 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-spv9c"/"openshift-service-ca.crt" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.857136 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzc4\" (UniqueName: \"kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.857539 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.858794 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-spv9c/must-gather-t8z9v"] Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.959993 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzc4\" (UniqueName: \"kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.960102 4780 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.960653 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:31 crc kubenswrapper[4780]: I1205 09:35:31.981838 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzc4\" (UniqueName: \"kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4\") pod \"must-gather-t8z9v\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:32 crc kubenswrapper[4780]: I1205 09:35:32.330508 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:35:33 crc kubenswrapper[4780]: I1205 09:35:33.062795 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-spv9c/must-gather-t8z9v"] Dec 05 09:35:33 crc kubenswrapper[4780]: I1205 09:35:33.364409 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/must-gather-t8z9v" event={"ID":"39e33c36-514a-48aa-ac3d-8f5988371fc7","Type":"ContainerStarted","Data":"d59e97139d5be6eb7154abbcfd896c6574b681d2d53d261251568769d47c728d"} Dec 05 09:35:37 crc kubenswrapper[4780]: I1205 09:35:37.416388 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/must-gather-t8z9v" event={"ID":"39e33c36-514a-48aa-ac3d-8f5988371fc7","Type":"ContainerStarted","Data":"a8b2e641797330c31494d40f5970b4d01834b7403e4060cd5137c10d3e6fc429"} Dec 05 09:35:37 crc kubenswrapper[4780]: I1205 09:35:37.416839 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/must-gather-t8z9v" event={"ID":"39e33c36-514a-48aa-ac3d-8f5988371fc7","Type":"ContainerStarted","Data":"7657ed80e699366adc2316ca16297e32d48b536bbc2b63c716cb85a5973fc8e2"} Dec 05 09:35:37 crc kubenswrapper[4780]: I1205 09:35:37.441258 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-spv9c/must-gather-t8z9v" podStartSLOduration=2.888050551 podStartE2EDuration="6.441240344s" podCreationTimestamp="2025-12-05 09:35:31 +0000 UTC" firstStartedPulling="2025-12-05 09:35:33.076232535 +0000 UTC m=+10167.145748857" lastFinishedPulling="2025-12-05 09:35:36.629422318 +0000 UTC m=+10170.698938650" observedRunningTime="2025-12-05 09:35:37.440942676 +0000 UTC m=+10171.510459008" watchObservedRunningTime="2025-12-05 09:35:37.441240344 +0000 UTC m=+10171.510756676" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.484074 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-spv9c/crc-debug-jcqs2"] Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.486036 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.488531 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-spv9c"/"default-dockercfg-zxctr" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.631004 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5wm\" (UniqueName: \"kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.631432 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.733021 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.733182 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.733196 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5wm\" (UniqueName: \"kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.759781 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5wm\" (UniqueName: \"kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm\") pod \"crc-debug-jcqs2\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: I1205 09:35:41.811832 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:35:41 crc kubenswrapper[4780]: W1205 09:35:41.852610 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe26cd03_3530_4d8b_9b95_dd492927e6dc.slice/crio-622c26393a7070a3b62669d7bbee8919a72a047efcb4be043c14e3efc74ad832 WatchSource:0}: Error finding container 622c26393a7070a3b62669d7bbee8919a72a047efcb4be043c14e3efc74ad832: Status 404 returned error can't find the container with id 622c26393a7070a3b62669d7bbee8919a72a047efcb4be043c14e3efc74ad832 Dec 05 09:35:42 crc kubenswrapper[4780]: I1205 09:35:42.468580 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" event={"ID":"be26cd03-3530-4d8b-9b95-dd492927e6dc","Type":"ContainerStarted","Data":"622c26393a7070a3b62669d7bbee8919a72a047efcb4be043c14e3efc74ad832"} Dec 05 09:35:53 crc kubenswrapper[4780]: I1205 09:35:53.602821 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" event={"ID":"be26cd03-3530-4d8b-9b95-dd492927e6dc","Type":"ContainerStarted","Data":"faadde9b4f9fbd1bd6710474132412ab67b426e74c295ef17aac085af34e60ce"} Dec 05 09:35:53 crc kubenswrapper[4780]: I1205 09:35:53.620836 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" podStartSLOduration=1.810849401 podStartE2EDuration="12.620818012s" podCreationTimestamp="2025-12-05 09:35:41 +0000 UTC" firstStartedPulling="2025-12-05 09:35:41.854537403 +0000 UTC m=+10175.924053735" lastFinishedPulling="2025-12-05 09:35:52.664506014 +0000 UTC m=+10186.734022346" observedRunningTime="2025-12-05 09:35:53.615384056 +0000 UTC m=+10187.684900398" watchObservedRunningTime="2025-12-05 09:35:53.620818012 +0000 UTC m=+10187.690334344" Dec 05 09:36:44 crc kubenswrapper[4780]: I1205 09:36:44.082084 4780 generic.go:334] "Generic (PLEG): container finished" podID="be26cd03-3530-4d8b-9b95-dd492927e6dc" containerID="faadde9b4f9fbd1bd6710474132412ab67b426e74c295ef17aac085af34e60ce" exitCode=0 Dec 05 09:36:44 crc kubenswrapper[4780]: I1205 09:36:44.082173 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" event={"ID":"be26cd03-3530-4d8b-9b95-dd492927e6dc","Type":"ContainerDied","Data":"faadde9b4f9fbd1bd6710474132412ab67b426e74c295ef17aac085af34e60ce"} Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.205839 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.248184 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-jcqs2"] Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.263342 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-jcqs2"] Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.334275 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5wm\" (UniqueName: \"kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm\") pod \"be26cd03-3530-4d8b-9b95-dd492927e6dc\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.334348 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host\") pod \"be26cd03-3530-4d8b-9b95-dd492927e6dc\" (UID: \"be26cd03-3530-4d8b-9b95-dd492927e6dc\") " Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.334491 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host" (OuterVolumeSpecName: "host") pod "be26cd03-3530-4d8b-9b95-dd492927e6dc" (UID: "be26cd03-3530-4d8b-9b95-dd492927e6dc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.334790 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be26cd03-3530-4d8b-9b95-dd492927e6dc-host\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.340117 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm" (OuterVolumeSpecName: "kube-api-access-fp5wm") pod "be26cd03-3530-4d8b-9b95-dd492927e6dc" (UID: "be26cd03-3530-4d8b-9b95-dd492927e6dc"). InnerVolumeSpecName "kube-api-access-fp5wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:36:45 crc kubenswrapper[4780]: I1205 09:36:45.436553 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp5wm\" (UniqueName: \"kubernetes.io/projected/be26cd03-3530-4d8b-9b95-dd492927e6dc-kube-api-access-fp5wm\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.100635 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622c26393a7070a3b62669d7bbee8919a72a047efcb4be043c14e3efc74ad832" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.100707 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-jcqs2" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.150054 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be26cd03-3530-4d8b-9b95-dd492927e6dc" path="/var/lib/kubelet/pods/be26cd03-3530-4d8b-9b95-dd492927e6dc/volumes" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.398571 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-spv9c/crc-debug-xwskl"] Dec 05 09:36:46 crc kubenswrapper[4780]: E1205 09:36:46.398988 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be26cd03-3530-4d8b-9b95-dd492927e6dc" containerName="container-00" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.399001 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="be26cd03-3530-4d8b-9b95-dd492927e6dc" containerName="container-00" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.399193 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="be26cd03-3530-4d8b-9b95-dd492927e6dc" containerName="container-00" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.399865 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.401491 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-spv9c"/"default-dockercfg-zxctr" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.557993 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtbhf\" (UniqueName: \"kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.558050 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.659696 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtbhf\" (UniqueName: \"kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.659754 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.659900 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.677680 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtbhf\" (UniqueName: 
\"kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf\") pod \"crc-debug-xwskl\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:46 crc kubenswrapper[4780]: I1205 09:36:46.716149 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:47 crc kubenswrapper[4780]: I1205 09:36:47.111764 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-xwskl" event={"ID":"05314c02-6729-4695-9b92-6fc43eb9a614","Type":"ContainerStarted","Data":"4823d433fdc9404220c9c187e666bffe89a1617111eb612b093755b9f9c92236"} Dec 05 09:36:47 crc kubenswrapper[4780]: I1205 09:36:47.112422 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-xwskl" event={"ID":"05314c02-6729-4695-9b92-6fc43eb9a614","Type":"ContainerStarted","Data":"63eca2b4ce17ca03bcd2c587e025dde977168651555dbf6fa87b1401cb16a5e8"} Dec 05 09:36:47 crc kubenswrapper[4780]: I1205 09:36:47.130156 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-spv9c/crc-debug-xwskl" podStartSLOduration=1.130131757 podStartE2EDuration="1.130131757s" podCreationTimestamp="2025-12-05 09:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 09:36:47.124479235 +0000 UTC m=+10241.193995567" watchObservedRunningTime="2025-12-05 09:36:47.130131757 +0000 UTC m=+10241.199648089" Dec 05 09:36:48 crc kubenswrapper[4780]: I1205 09:36:48.122609 4780 generic.go:334] "Generic (PLEG): container finished" podID="05314c02-6729-4695-9b92-6fc43eb9a614" containerID="4823d433fdc9404220c9c187e666bffe89a1617111eb612b093755b9f9c92236" exitCode=0 Dec 05 09:36:48 crc kubenswrapper[4780]: I1205 09:36:48.122680 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-xwskl" event={"ID":"05314c02-6729-4695-9b92-6fc43eb9a614","Type":"ContainerDied","Data":"4823d433fdc9404220c9c187e666bffe89a1617111eb612b093755b9f9c92236"} Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.255585 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.409919 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host\") pod \"05314c02-6729-4695-9b92-6fc43eb9a614\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.410086 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtbhf\" (UniqueName: \"kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf\") pod \"05314c02-6729-4695-9b92-6fc43eb9a614\" (UID: \"05314c02-6729-4695-9b92-6fc43eb9a614\") " Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.410214 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host" (OuterVolumeSpecName: "host") pod "05314c02-6729-4695-9b92-6fc43eb9a614" (UID: "05314c02-6729-4695-9b92-6fc43eb9a614"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.410823 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05314c02-6729-4695-9b92-6fc43eb9a614-host\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.421455 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf" (OuterVolumeSpecName: "kube-api-access-dtbhf") pod "05314c02-6729-4695-9b92-6fc43eb9a614" (UID: "05314c02-6729-4695-9b92-6fc43eb9a614"). InnerVolumeSpecName "kube-api-access-dtbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:36:49 crc kubenswrapper[4780]: I1205 09:36:49.512468 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtbhf\" (UniqueName: \"kubernetes.io/projected/05314c02-6729-4695-9b92-6fc43eb9a614-kube-api-access-dtbhf\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:50 crc kubenswrapper[4780]: I1205 09:36:50.140144 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-xwskl" Dec 05 09:36:50 crc kubenswrapper[4780]: I1205 09:36:50.150074 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-xwskl" event={"ID":"05314c02-6729-4695-9b92-6fc43eb9a614","Type":"ContainerDied","Data":"63eca2b4ce17ca03bcd2c587e025dde977168651555dbf6fa87b1401cb16a5e8"} Dec 05 09:36:50 crc kubenswrapper[4780]: I1205 09:36:50.150112 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63eca2b4ce17ca03bcd2c587e025dde977168651555dbf6fa87b1401cb16a5e8" Dec 05 09:36:50 crc kubenswrapper[4780]: I1205 09:36:50.166108 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-xwskl"] Dec 05 09:36:50 crc kubenswrapper[4780]: I1205 09:36:50.179646 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-xwskl"] Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.313758 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-spv9c/crc-debug-qjqnz"] Dec 05 09:36:51 crc kubenswrapper[4780]: E1205 09:36:51.314503 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05314c02-6729-4695-9b92-6fc43eb9a614" containerName="container-00" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.314516 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="05314c02-6729-4695-9b92-6fc43eb9a614" containerName="container-00" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.314700 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="05314c02-6729-4695-9b92-6fc43eb9a614" containerName="container-00" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.315463 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.318102 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-spv9c"/"default-dockercfg-zxctr" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.459151 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdc5r\" (UniqueName: \"kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.459240 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.562215 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdc5r\" (UniqueName: \"kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.562316 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.562691 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.584182 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdc5r\" (UniqueName: \"kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r\") pod \"crc-debug-qjqnz\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: I1205 09:36:51.639524 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:51 crc kubenswrapper[4780]: W1205 09:36:51.668575 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee24d963_6de9_4fe5_9013_97ba012fa45a.slice/crio-87c5e280f24ac9f4aa2b7a9e9aa5474ec56ff406fbac503395c8250d166b9359 WatchSource:0}: Error finding container 87c5e280f24ac9f4aa2b7a9e9aa5474ec56ff406fbac503395c8250d166b9359: Status 404 returned error can't find the container with id 87c5e280f24ac9f4aa2b7a9e9aa5474ec56ff406fbac503395c8250d166b9359 Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.149249 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05314c02-6729-4695-9b92-6fc43eb9a614" path="/var/lib/kubelet/pods/05314c02-6729-4695-9b92-6fc43eb9a614/volumes" Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.156406 4780 generic.go:334] "Generic (PLEG): container finished" podID="ee24d963-6de9-4fe5-9013-97ba012fa45a" containerID="90012b3f739ceaf678157abcf0929f7d711ff1ba1997c5ce0dd019d01f7a1755" exitCode=0 Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.156497 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" event={"ID":"ee24d963-6de9-4fe5-9013-97ba012fa45a","Type":"ContainerDied","Data":"90012b3f739ceaf678157abcf0929f7d711ff1ba1997c5ce0dd019d01f7a1755"} Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.156701 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" event={"ID":"ee24d963-6de9-4fe5-9013-97ba012fa45a","Type":"ContainerStarted","Data":"87c5e280f24ac9f4aa2b7a9e9aa5474ec56ff406fbac503395c8250d166b9359"} Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.191507 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-qjqnz"] Dec 05 09:36:52 crc kubenswrapper[4780]: I1205 09:36:52.199950 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-spv9c/crc-debug-qjqnz"] Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.295514 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.397870 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdc5r\" (UniqueName: \"kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r\") pod \"ee24d963-6de9-4fe5-9013-97ba012fa45a\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.398245 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host\") pod \"ee24d963-6de9-4fe5-9013-97ba012fa45a\" (UID: \"ee24d963-6de9-4fe5-9013-97ba012fa45a\") " Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.398524 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host" (OuterVolumeSpecName: "host") pod "ee24d963-6de9-4fe5-9013-97ba012fa45a" (UID: "ee24d963-6de9-4fe5-9013-97ba012fa45a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.399100 4780 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee24d963-6de9-4fe5-9013-97ba012fa45a-host\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.406149 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r" (OuterVolumeSpecName: "kube-api-access-mdc5r") pod "ee24d963-6de9-4fe5-9013-97ba012fa45a" (UID: "ee24d963-6de9-4fe5-9013-97ba012fa45a"). InnerVolumeSpecName "kube-api-access-mdc5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:36:53 crc kubenswrapper[4780]: I1205 09:36:53.501229 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdc5r\" (UniqueName: \"kubernetes.io/projected/ee24d963-6de9-4fe5-9013-97ba012fa45a-kube-api-access-mdc5r\") on node \"crc\" DevicePath \"\"" Dec 05 09:36:54 crc kubenswrapper[4780]: I1205 09:36:54.150078 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee24d963-6de9-4fe5-9013-97ba012fa45a" path="/var/lib/kubelet/pods/ee24d963-6de9-4fe5-9013-97ba012fa45a/volumes" Dec 05 09:36:54 crc kubenswrapper[4780]: I1205 09:36:54.177359 4780 scope.go:117] "RemoveContainer" containerID="90012b3f739ceaf678157abcf0929f7d711ff1ba1997c5ce0dd019d01f7a1755" Dec 05 09:36:54 crc kubenswrapper[4780]: I1205 09:36:54.177460 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/crc-debug-qjqnz" Dec 05 09:36:59 crc kubenswrapper[4780]: I1205 09:36:59.907634 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:36:59 crc kubenswrapper[4780]: I1205 09:36:59.908283 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:37:29 crc kubenswrapper[4780]: I1205 09:37:29.908257 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:37:29 crc kubenswrapper[4780]: I1205 09:37:29.908802 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:37:59 crc kubenswrapper[4780]: I1205 09:37:59.907929 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:37:59 crc 
kubenswrapper[4780]: I1205 09:37:59.908840 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:37:59 crc kubenswrapper[4780]: I1205 09:37:59.908917 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:37:59 crc kubenswrapper[4780]: I1205 09:37:59.909866 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:37:59 crc kubenswrapper[4780]: I1205 09:37:59.909951 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" gracePeriod=600 Dec 05 09:38:00 crc kubenswrapper[4780]: E1205 09:38:00.030452 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:00 crc kubenswrapper[4780]: I1205 09:38:00.795294 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" exitCode=0 Dec 05 09:38:00 crc kubenswrapper[4780]: I1205 09:38:00.795332 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242"} Dec 05 09:38:00 crc kubenswrapper[4780]: I1205 09:38:00.795380 4780 scope.go:117] "RemoveContainer" containerID="ef6939699043fa89441f96fc078c37f2274134814a74fc97ac43e79e06e189f1" Dec 05 09:38:00 crc kubenswrapper[4780]: I1205 09:38:00.797031 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:38:00 crc kubenswrapper[4780]: E1205 09:38:00.797614 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:12 crc kubenswrapper[4780]: I1205 09:38:12.139356 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:38:12 crc 
kubenswrapper[4780]: E1205 09:38:12.140374 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:23 crc kubenswrapper[4780]: I1205 09:38:23.140846 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:38:23 crc kubenswrapper[4780]: E1205 09:38:23.142211 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:33 crc kubenswrapper[4780]: I1205 09:38:33.670114 4780 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" podUID="df87efac-4c66-45a5-86d4-9a36f7e21a53" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.53:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 09:38:33 crc kubenswrapper[4780]: I1205 09:38:33.693903 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4plmn" podUID="df87efac-4c66-45a5-86d4-9a36f7e21a53" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.53:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 09:38:33 crc kubenswrapper[4780]: I1205 09:38:33.693951 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-27nsf" podUID="b354ba59-4664-4c61-abe6-e31896facfa5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 09:38:36 crc kubenswrapper[4780]: I1205 09:38:36.146913 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:38:36 crc kubenswrapper[4780]: E1205 09:38:36.147403 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.031998 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_93e192aa-558f-423c-9ed8-d0e110dab4fc/init-config-reloader/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.280777 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_93e192aa-558f-423c-9ed8-d0e110dab4fc/alertmanager/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.305180 4780 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_93e192aa-558f-423c-9ed8-d0e110dab4fc/init-config-reloader/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.379144 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_93e192aa-558f-423c-9ed8-d0e110dab4fc/config-reloader/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.508555 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_22765156-6f4d-420a-a071-68e4c7eae696/aodh-api/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.604977 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_22765156-6f4d-420a-a071-68e4c7eae696/aodh-evaluator/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.690534 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_22765156-6f4d-420a-a071-68e4c7eae696/aodh-listener/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.729429 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_22765156-6f4d-420a-a071-68e4c7eae696/aodh-notifier/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.841259 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55486c8ff8-8th4d_e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e/barbican-api/0.log" Dec 05 09:38:44 crc kubenswrapper[4780]: I1205 09:38:44.898709 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55486c8ff8-8th4d_e66ff5c5-7e7c-4ee2-9395-a143e5dc0c1e/barbican-api-log/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.056437 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b5984894b-tnszk_40e45167-1f28-490f-aa73-35137f2d0f1a/barbican-keystone-listener/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.239979 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d7cbcbfd9-hlgct_dfccf240-dcd1-4f3d-92c5-3c195a1a481d/barbican-worker/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.295571 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d7cbcbfd9-hlgct_dfccf240-dcd1-4f3d-92c5-3c195a1a481d/barbican-worker-log/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.420474 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b5984894b-tnszk_40e45167-1f28-490f-aa73-35137f2d0f1a/barbican-keystone-listener-log/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.495327 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-kp2b5_68a58c6a-0807-4975-aa44-5963fb679676/bootstrap-openstack-openstack-cell1/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.666716 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5a2ccc7e-6a76-4f09-893f-243fe7cee6d2/ceilometer-central-agent/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.699193 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5a2ccc7e-6a76-4f09-893f-243fe7cee6d2/ceilometer-notification-agent/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.721913 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5a2ccc7e-6a76-4f09-893f-243fe7cee6d2/proxy-httpd/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 
09:38:45.827227 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5a2ccc7e-6a76-4f09-893f-243fe7cee6d2/sg-core/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.933331 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4e0d923c-9a65-4436-bb37-eda463dd8de7/cinder-api/0.log" Dec 05 09:38:45 crc kubenswrapper[4780]: I1205 09:38:45.937729 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4e0d923c-9a65-4436-bb37-eda463dd8de7/cinder-api-log/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.117204 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3636634a-6a80-4605-89e3-6b2f3f4e6f0c/cinder-scheduler/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.168438 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3636634a-6a80-4605-89e3-6b2f3f4e6f0c/probe/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.305251 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-mnk58_dcdd5c3f-33bd-4587-8757-39d37d86d865/configure-network-openstack-openstack-cell1/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.388235 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-76cmr_0c867c8f-f5c9-46ab-bcf3-6f8ba0b1b8e5/configure-os-openstack-openstack-cell1/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.718432 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d8b8bbc9-ptbht_e569a1ba-a14b-4d20-b449-7eaea024d0e6/init/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.880792 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d8b8bbc9-ptbht_e569a1ba-a14b-4d20-b449-7eaea024d0e6/init/0.log" Dec 05 09:38:46 crc kubenswrapper[4780]: I1205 09:38:46.992594 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-hkp2s_b7de0c87-8682-4630-acd8-c79d7cfa4884/download-cache-openstack-openstack-cell1/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.009460 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6d8b8bbc9-ptbht_e569a1ba-a14b-4d20-b449-7eaea024d0e6/dnsmasq-dns/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.186061 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f46ee8ba-7661-4f55-a82c-c02c35272b58/glance-httpd/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.228061 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f46ee8ba-7661-4f55-a82c-c02c35272b58/glance-log/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.391946 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_98d6811c-d6a8-4641-9b0d-d0b977125526/glance-log/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.423251 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_98d6811c-d6a8-4641-9b0d-d0b977125526/glance-httpd/0.log" Dec 05 09:38:47 crc kubenswrapper[4780]: I1205 09:38:47.970772 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-5dd47f8876-qrmx7_ae3f2dd4-7f98-4fad-a85b-a2fec900c371/heat-engine/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.141637 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5894959478-n5k57_d7dba673-a0f0-4f16-8680-4701afda88b9/heat-api/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.214279 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6666c5554b-zfm84_893ee1ed-bef7-42d5-9582-af53d85b3d6d/horizon/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.367671 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-jhkh5_2ae81285-c333-416d-b78f-dceba6c3ffda/install-certs-openstack-openstack-cell1/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.439973 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-576847467c-n7nz9_fdbbc584-4e90-4427-88e8-88fe14a459f6/heat-cfnapi/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.572959 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-z89qw_9a5eaea9-50ed-4f64-8c7e-4f92de056ed7/install-os-openstack-openstack-cell1/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.590014 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:38:48 crc kubenswrapper[4780]: E1205 09:38:48.590537 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee24d963-6de9-4fe5-9013-97ba012fa45a" containerName="container-00" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.590560 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee24d963-6de9-4fe5-9013-97ba012fa45a" containerName="container-00" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.590834 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee24d963-6de9-4fe5-9013-97ba012fa45a" containerName="container-00" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.592484 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.610793 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.717214 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72bd\" (UniqueName: \"kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.717672 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.717864 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.821151 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.821464 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.821569 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72bd\" (UniqueName: \"kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.821858 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.822165 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.841837 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-6666c5554b-zfm84_893ee1ed-bef7-42d5-9582-af53d85b3d6d/horizon-log/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.852603 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29415421-7jkhn_8a7da00b-061e-4af6-883c-57fd7deb39e1/keystone-cron/0.log" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.859139 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72bd\" (UniqueName: \"kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd\") pod \"redhat-marketplace-nbcpx\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:48 crc kubenswrapper[4780]: I1205 09:38:48.917688 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.129828 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_fa6cf62d-3a3f-45f6-82ba-e010b2cdda3d/kube-state-metrics/0.log" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.263523 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-bwvvh_77265a47-156e-4225-9ca8-0cb7000048b3/libvirt-openstack-openstack-cell1/0.log" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.472030 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.584970 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.588982 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.601702 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.638447 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.638543 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.638614 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2g7\" (UniqueName: \"kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.740412 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.740494 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2g7\" (UniqueName: \"kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.740604 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.740911 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.741075 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.768865 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lw2g7\" (UniqueName: \"kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7\") pod \"certified-operators-8f54l\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.794808 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f4c59d686-tl2g6_e4f8b205-64a6-4f1d-a468-c5e4e399de9a/keystone-api/0.log" Dec 05 09:38:49 crc kubenswrapper[4780]: I1205 09:38:49.969547 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.201993 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d445d4c-g6fcv_940ad4c5-eea8-4c74-af2b-475201d54bc4/neutron-httpd/0.log" Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.288747 4780 generic.go:334] "Generic (PLEG): container finished" podID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerID="59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0" exitCode=0 Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.289067 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerDied","Data":"59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0"} Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.289093 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerStarted","Data":"b402c43424fd7958693dad1d4a1d727e606522749cc5e1429d88ca105a311b5d"} Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.486558 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d445d4c-g6fcv_940ad4c5-eea8-4c74-af2b-475201d54bc4/neutron-api/0.log" Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.638769 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-r2mdf_4ea9a8f2-904c-45b8-9e1a-72a0780a003a/neutron-dhcp-openstack-openstack-cell1/0.log" Dec 05 09:38:50 crc kubenswrapper[4780]: W1205 09:38:50.678199 4780 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e55fbb6_5102_4dca_942e_f30972815db3.slice/crio-088f079b3e463340f09a97d6fa9767c864d4d57acce099e27733fd60bea29379 WatchSource:0}: Error finding container 088f079b3e463340f09a97d6fa9767c864d4d57acce099e27733fd60bea29379: Status 404 returned error can't find the container with id 088f079b3e463340f09a97d6fa9767c864d4d57acce099e27733fd60bea29379 Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.686955 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:38:50 crc kubenswrapper[4780]: I1205 09:38:50.847024 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-xrklf_99f4a150-ad95-40fe-b058-3cf6d49aed23/neutron-metadata-openstack-openstack-cell1/0.log" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.087762 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-mghjq_63f6d5db-07fa-40e2-9efa-b29f78bbfe3b/neutron-sriov-openstack-openstack-cell1/0.log" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.140147 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:38:51 crc kubenswrapper[4780]: E1205 09:38:51.140850 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.305365 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerStarted","Data":"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72"} Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.308393 4780 generic.go:334] "Generic (PLEG): container finished" podID="7e55fbb6-5102-4dca-942e-f30972815db3" containerID="6dbe25e940a86ffb5c88fbcdd069ad1c0e0ec58dea98e9165f5e475fd2ebf09b" exitCode=0 Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.308471 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerDied","Data":"6dbe25e940a86ffb5c88fbcdd069ad1c0e0ec58dea98e9165f5e475fd2ebf09b"} Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.308505 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerStarted","Data":"088f079b3e463340f09a97d6fa9767c864d4d57acce099e27733fd60bea29379"} Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.524188 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_8864df53-14ca-40d5-9200-18c54f92600f/nova-cell0-conductor-conductor/0.log" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.637281 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_313f56bf-7b78-4af8-bdce-b386cba8dfcb/nova-api-log/0.log" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.866024 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_313f56bf-7b78-4af8-bdce-b386cba8dfcb/nova-api-api/0.log" Dec 05 09:38:51 crc kubenswrapper[4780]: I1205 09:38:51.896622 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c170a703-f3e7-409c-8102-9ad7915e513c/nova-cell1-conductor-conductor/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.042438 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e85de255-98ca-4e1b-8a26-96597ae078aa/nova-cell1-novncproxy-novncproxy/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.192414 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellv5kr4_c38ee0d1-13fd-48a8-8e0c-e7864d35d3fb/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.327222 4780 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerStarted","Data":"23f1bb834dcc443a3eacfabe335bfbe0cccca26033d031cc907ab1c6b5deb376"} Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.334126 4780 generic.go:334] "Generic (PLEG): container finished" podID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerID="b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72" exitCode=0 Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.334184 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerDied","Data":"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72"} Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.411541 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-mjwl8_3fc60cde-15e4-44b7-a344-60f6420d9374/nova-cell1-openstack-openstack-cell1/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.559593 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_286873d6-e3e1-4a17-b5b1-1697e5bcc61e/nova-metadata-log/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.877454 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1b7813ed-7cb9-4c31-a3fd-edce76fdb1e0/nova-scheduler-scheduler/0.log" Dec 05 09:38:52 crc kubenswrapper[4780]: I1205 09:38:52.993870 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_39137b68-813b-4d2a-b543-730c76488431/mysql-bootstrap/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.198623 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_39137b68-813b-4d2a-b543-730c76488431/galera/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.231155 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_39137b68-813b-4d2a-b543-730c76488431/mysql-bootstrap/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.299972 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_286873d6-e3e1-4a17-b5b1-1697e5bcc61e/nova-metadata-metadata/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.349244 4780 generic.go:334] "Generic (PLEG): container finished" podID="7e55fbb6-5102-4dca-942e-f30972815db3" containerID="23f1bb834dcc443a3eacfabe335bfbe0cccca26033d031cc907ab1c6b5deb376" exitCode=0 Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.349300 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerDied","Data":"23f1bb834dcc443a3eacfabe335bfbe0cccca26033d031cc907ab1c6b5deb376"} Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.352290 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerStarted","Data":"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c"} Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.394494 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nbcpx" podStartSLOduration=2.623453642 
podStartE2EDuration="5.394472988s" podCreationTimestamp="2025-12-05 09:38:48 +0000 UTC" firstStartedPulling="2025-12-05 09:38:50.31225788 +0000 UTC m=+10364.381774212" lastFinishedPulling="2025-12-05 09:38:53.083277226 +0000 UTC m=+10367.152793558" observedRunningTime="2025-12-05 09:38:53.39230385 +0000 UTC m=+10367.461820192" watchObservedRunningTime="2025-12-05 09:38:53.394472988 +0000 UTC m=+10367.463989320" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.428605 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ff360368-93b6-4ab0-b1d6-e53682ec9336/mysql-bootstrap/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.615182 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ff360368-93b6-4ab0-b1d6-e53682ec9336/mysql-bootstrap/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.700594 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f25089e3-336d-4e60-a932-2f027ad4d516/openstackclient/0.log" Dec 05 09:38:53 crc kubenswrapper[4780]: I1205 09:38:53.702268 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ff360368-93b6-4ab0-b1d6-e53682ec9336/galera/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.123349 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b6ee248-3deb-4c44-962b-6c6e174c9b68/openstack-network-exporter/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.273094 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b6ee248-3deb-4c44-962b-6c6e174c9b68/ovn-northd/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.364633 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerStarted","Data":"da163111deef4417f76bc17f2656355818e5525548064ea0351b800e8c38c872"} Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.390762 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8f54l" podStartSLOduration=2.95323526 podStartE2EDuration="5.390744553s" podCreationTimestamp="2025-12-05 09:38:49 +0000 UTC" firstStartedPulling="2025-12-05 09:38:51.310520778 +0000 UTC m=+10365.380037110" lastFinishedPulling="2025-12-05 09:38:53.748030071 +0000 UTC m=+10367.817546403" observedRunningTime="2025-12-05 09:38:54.384382551 +0000 UTC m=+10368.453898883" watchObservedRunningTime="2025-12-05 09:38:54.390744553 +0000 UTC m=+10368.460260885" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.560571 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2f0472df-b002-413c-afc1-28c9e0101566/openstack-network-exporter/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.601549 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-8j825_4406be2e-554b-4325-90c8-1c2764436e70/ovn-openstack-openstack-cell1/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.795846 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2f0472df-b002-413c-afc1-28c9e0101566/ovsdbserver-nb/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.927300 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_659b842c-8e50-40ac-9b42-de934ff34209/openstack-network-exporter/0.log" Dec 05 09:38:54 crc kubenswrapper[4780]: I1205 09:38:54.990415 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_659b842c-8e50-40ac-9b42-de934ff34209/ovsdbserver-nb/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.212438 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_b17ee026-2831-4330-a9ff-92edb8901c90/openstack-network-exporter/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.229012 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_b17ee026-2831-4330-a9ff-92edb8901c90/ovsdbserver-nb/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.488244 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c6417d4-b13a-49fb-84fd-8b8b694fe781/ovsdbserver-sb/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.500994 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0c6417d4-b13a-49fb-84fd-8b8b694fe781/openstack-network-exporter/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.550679 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_24170560-10d5-4ffe-b699-0cf14104ef10/openstack-network-exporter/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.756068 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_24170560-10d5-4ffe-b699-0cf14104ef10/ovsdbserver-sb/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.787159 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_1cd288c0-8daf-49b6-8150-0e20c2cd58f0/openstack-network-exporter/0.log" Dec 05 09:38:55 crc kubenswrapper[4780]: I1205 09:38:55.917061 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_1cd288c0-8daf-49b6-8150-0e20c2cd58f0/ovsdbserver-sb/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.207069 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d948d4648-j8s68_ad846df1-a795-4eb2-a063-e7f92b916f78/placement-api/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.273827 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d948d4648-j8s68_ad846df1-a795-4eb2-a063-e7f92b916f78/placement-log/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.298096 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cphbps_77271347-a925-4f54-84d2-97489c85a5bc/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.483908 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5/init-config-reloader/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.637626 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5/init-config-reloader/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.729556 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5/config-reloader/0.log" Dec 05 09:38:56 crc 
kubenswrapper[4780]: I1205 09:38:56.758301 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5/thanos-sidecar/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.758428 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ca2c18fc-c708-43b7-bdbc-cc1cb92fb1d5/prometheus/0.log" Dec 05 09:38:56 crc kubenswrapper[4780]: I1205 09:38:56.963019 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bcc59214-4e52-4392-bf7d-240a70c0326b/setup-container/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.144966 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bcc59214-4e52-4392-bf7d-240a70c0326b/setup-container/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.193098 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5ccf6f95-5684-468d-a08e-dd0fa0e92c35/setup-container/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.228734 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_bcc59214-4e52-4392-bf7d-240a70c0326b/rabbitmq/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.442236 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5ccf6f95-5684-468d-a08e-dd0fa0e92c35/setup-container/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.562680 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-hkz6x_942cc838-8886-4800-b03e-5e286e6700c0/reboot-os-openstack-openstack-cell1/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.716023 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5ccf6f95-5684-468d-a08e-dd0fa0e92c35/rabbitmq/0.log" Dec 05 09:38:57 crc kubenswrapper[4780]: I1205 09:38:57.761366 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-nv2qj_34e20bd1-a448-4d66-b6eb-8d7b2f7108c6/run-os-openstack-openstack-cell1/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.105364 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-zvpbm_0bb77d44-7251-43b1-864b-aa98ab803837/ssh-known-hosts-openstack/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.309682 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56fff8bbb4-tvswm_aca38aa6-7f9c-470a-a24d-80def95f09f7/proxy-server/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.469978 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-56fff8bbb4-tvswm_aca38aa6-7f9c-470a-a24d-80def95f09f7/proxy-httpd/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.486757 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-49kwx_42b47e2e-4b29-4148-a992-34d12379b270/swift-ring-rebalance/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.917911 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.917974 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:58 crc 
kubenswrapper[4780]: I1205 09:38:58.953946 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8b3db936-5d89-4bde-8f47-5740a6bb4b93/test-operator-logs-container/0.log" Dec 05 09:38:58 crc kubenswrapper[4780]: I1205 09:38:58.974571 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.329191 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-2f7pf_47fba3c8-cdfe-4395-a921-933521a08de8/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.342486 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-88kbg_80fafa83-0a64-48b0-9bd9-a5c59a344b8d/telemetry-openstack-openstack-cell1/0.log" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.364376 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5d2ce5fa-2138-48bd-9af7-76d136e21dfe/tempest-tests-tempest-tests-runner/0.log" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.482173 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.528715 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-dp96t_1df681e3-86d2-4b66-9dc8-52ce67e207dc/validate-network-openstack-openstack-cell1/0.log" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.970201 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:38:59 crc kubenswrapper[4780]: I1205 09:38:59.970251 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:39:00 crc kubenswrapper[4780]: I1205 09:39:00.025191 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:39:00 crc kubenswrapper[4780]: I1205 09:39:00.474353 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:39:00 crc kubenswrapper[4780]: I1205 09:39:00.556045 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:39:01 crc kubenswrapper[4780]: I1205 09:39:01.428640 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbcpx" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="registry-server" containerID="cri-o://b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c" gracePeriod=2 Dec 05 09:39:01 crc kubenswrapper[4780]: I1205 09:39:01.913638 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.051006 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities\") pod \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.051127 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c72bd\" (UniqueName: \"kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd\") pod \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.051164 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content\") pod \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\" (UID: \"5dd21289-8c0e-4763-923a-2adeeb3ef17b\") " Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.051953 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities" (OuterVolumeSpecName: "utilities") pod "5dd21289-8c0e-4763-923a-2adeeb3ef17b" (UID: "5dd21289-8c0e-4763-923a-2adeeb3ef17b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.057322 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd" (OuterVolumeSpecName: "kube-api-access-c72bd") pod "5dd21289-8c0e-4763-923a-2adeeb3ef17b" (UID: "5dd21289-8c0e-4763-923a-2adeeb3ef17b"). InnerVolumeSpecName "kube-api-access-c72bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.072225 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dd21289-8c0e-4763-923a-2adeeb3ef17b" (UID: "5dd21289-8c0e-4763-923a-2adeeb3ef17b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.138576 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:39:02 crc kubenswrapper[4780]: E1205 09:39:02.138954 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.153272 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.153299 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c72bd\" (UniqueName: \"kubernetes.io/projected/5dd21289-8c0e-4763-923a-2adeeb3ef17b-kube-api-access-c72bd\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.153309 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd21289-8c0e-4763-923a-2adeeb3ef17b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.361615 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.454896 4780 generic.go:334] "Generic (PLEG): container finished" podID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerID="b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c" exitCode=0 Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.455125 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8f54l" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="registry-server" containerID="cri-o://da163111deef4417f76bc17f2656355818e5525548064ea0351b800e8c38c872" gracePeriod=2 Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.455265 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbcpx" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.455946 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerDied","Data":"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c"} Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.456061 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbcpx" event={"ID":"5dd21289-8c0e-4763-923a-2adeeb3ef17b","Type":"ContainerDied","Data":"b402c43424fd7958693dad1d4a1d727e606522749cc5e1429d88ca105a311b5d"} Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.456084 4780 scope.go:117] "RemoveContainer" containerID="b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.489981 4780 scope.go:117] "RemoveContainer" containerID="b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.502242 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.510470 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbcpx"] Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.540631 4780 scope.go:117] "RemoveContainer" containerID="59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.637597 4780 scope.go:117] "RemoveContainer" containerID="b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c" Dec 05 09:39:02 crc kubenswrapper[4780]: E1205 09:39:02.638662 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c\": container with ID starting with b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c not found: ID does not exist" containerID="b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.638706 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c"} err="failed to get container status \"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c\": rpc error: code = NotFound desc = could not find container \"b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c\": container with ID starting with b8db63f0efe0a0ef9c1bb1b78f90273ca25ebf27439bc1ad38483b0f0790216c not found: ID does not exist" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.638736 4780 scope.go:117] "RemoveContainer" containerID="b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72" Dec 05 09:39:02 crc kubenswrapper[4780]: E1205 09:39:02.639038 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72\": container with ID starting with b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72 not found: ID does not exist" containerID="b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.639081 4780 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72"} err="failed to get container status \"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72\": rpc error: code = NotFound desc = could not find container \"b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72\": container with ID starting with b05089b357ed2fde836cda17f7fe06533f15eea103bbc44a0f6712338cf29b72 not found: ID does not exist" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.639111 4780 scope.go:117] "RemoveContainer" containerID="59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0" Dec 05 09:39:02 crc kubenswrapper[4780]: E1205 09:39:02.639569 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0\": container with ID starting with 59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0 not found: ID does not exist" containerID="59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0" Dec 05 09:39:02 crc kubenswrapper[4780]: I1205 09:39:02.639598 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0"} err="failed to get container status \"59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0\": rpc error: code = NotFound desc = could not find container \"59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0\": container with ID starting with 59ae5ceaad01b0ecc3655f64fd56b97ceb1cfc4f018256f674d32b726428a8e0 not found: ID does not exist" Dec 05 09:39:03 crc kubenswrapper[4780]: I1205 09:39:03.470655 4780 generic.go:334] "Generic (PLEG): container finished" podID="7e55fbb6-5102-4dca-942e-f30972815db3" containerID="da163111deef4417f76bc17f2656355818e5525548064ea0351b800e8c38c872" exitCode=0 Dec 05 09:39:03 crc kubenswrapper[4780]: I1205 09:39:03.470794 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerDied","Data":"da163111deef4417f76bc17f2656355818e5525548064ea0351b800e8c38c872"} Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.003234 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.149570 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" path="/var/lib/kubelet/pods/5dd21289-8c0e-4763-923a-2adeeb3ef17b/volumes" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.187023 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities\") pod \"7e55fbb6-5102-4dca-942e-f30972815db3\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.187127 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw2g7\" (UniqueName: \"kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7\") pod \"7e55fbb6-5102-4dca-942e-f30972815db3\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.187352 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content\") pod \"7e55fbb6-5102-4dca-942e-f30972815db3\" (UID: \"7e55fbb6-5102-4dca-942e-f30972815db3\") " Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.188119 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities" (OuterVolumeSpecName: "utilities") pod "7e55fbb6-5102-4dca-942e-f30972815db3" (UID: "7e55fbb6-5102-4dca-942e-f30972815db3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.193260 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.193516 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7" (OuterVolumeSpecName: "kube-api-access-lw2g7") pod "7e55fbb6-5102-4dca-942e-f30972815db3" (UID: "7e55fbb6-5102-4dca-942e-f30972815db3"). InnerVolumeSpecName "kube-api-access-lw2g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.244012 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e55fbb6-5102-4dca-942e-f30972815db3" (UID: "7e55fbb6-5102-4dca-942e-f30972815db3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.294890 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw2g7\" (UniqueName: \"kubernetes.io/projected/7e55fbb6-5102-4dca-942e-f30972815db3-kube-api-access-lw2g7\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.294925 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e55fbb6-5102-4dca-942e-f30972815db3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.487348 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8f54l" event={"ID":"7e55fbb6-5102-4dca-942e-f30972815db3","Type":"ContainerDied","Data":"088f079b3e463340f09a97d6fa9767c864d4d57acce099e27733fd60bea29379"} Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.488441 4780 scope.go:117] "RemoveContainer" containerID="da163111deef4417f76bc17f2656355818e5525548064ea0351b800e8c38c872" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.488216 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8f54l" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.525813 4780 scope.go:117] "RemoveContainer" containerID="23f1bb834dcc443a3eacfabe335bfbe0cccca26033d031cc907ab1c6b5deb376" Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.536845 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.548747 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8f54l"] Dec 05 09:39:04 crc kubenswrapper[4780]: I1205 09:39:04.551682 4780 scope.go:117] "RemoveContainer" containerID="6dbe25e940a86ffb5c88fbcdd069ad1c0e0ec58dea98e9165f5e475fd2ebf09b" Dec 05 09:39:06 crc kubenswrapper[4780]: I1205 09:39:06.157187 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" path="/var/lib/kubelet/pods/7e55fbb6-5102-4dca-942e-f30972815db3/volumes" Dec 05 09:39:16 crc kubenswrapper[4780]: I1205 09:39:16.145258 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:39:16 crc kubenswrapper[4780]: E1205 09:39:16.145960 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:39:16 crc kubenswrapper[4780]: I1205 09:39:16.151787 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_28c190e9-56f3-48c7-a072-4052688197f5/memcached/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.034090 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/util/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.235512 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/util/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.237977 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/pull/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.285259 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/pull/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.448664 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/util/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.459835 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/pull/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.473136 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafg6kfn_823cbb49-bcc2-45fe-9bbf-505a61b7eecc/extract/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.636509 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjqff_e4d45329-536a-48cf-932c-22669f486a7c/kube-rbac-proxy/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.694550 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7fj5f_d413e91e-0735-412e-8614-bd86a466267b/kube-rbac-proxy/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.749281 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-vjqff_e4d45329-536a-48cf-932c-22669f486a7c/manager/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.898620 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7fj5f_d413e91e-0735-412e-8614-bd86a466267b/manager/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.951035 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-4vfh9_d3c6c892-e943-45a9-bda7-63fbae6bc3c1/kube-rbac-proxy/0.log" Dec 05 09:39:27 crc kubenswrapper[4780]: I1205 09:39:27.962285 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-4vfh9_d3c6c892-e943-45a9-bda7-63fbae6bc3c1/manager/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.108770 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-kf66r_12b890b6-660a-4a2f-a2cf-2cbce76cafc6/kube-rbac-proxy/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.138932 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:39:28 crc kubenswrapper[4780]: E1205 09:39:28.139355 4780 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.278586 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-kf66r_12b890b6-660a-4a2f-a2cf-2cbce76cafc6/manager/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.329649 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9zlhl_fb14f9a1-3b2e-4c17-a750-b1188fff5b40/kube-rbac-proxy/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.345294 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9zlhl_fb14f9a1-3b2e-4c17-a750-b1188fff5b40/manager/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.469472 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-6j9rd_2f86d305-a39d-42ec-9a73-067610752615/kube-rbac-proxy/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.518698 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-6j9rd_2f86d305-a39d-42ec-9a73-067610752615/manager/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.620899 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-9ntxt_edd42acf-bc82-40b8-bae3-c8ce3f8dcd54/kube-rbac-proxy/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.853497 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7f7qk_8450df13-5e1a-4f4e-86dc-b1c841845554/manager/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.870018 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-7f7qk_8450df13-5e1a-4f4e-86dc-b1c841845554/kube-rbac-proxy/0.log" Dec 05 09:39:28 crc kubenswrapper[4780]: I1205 09:39:28.982934 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-9ntxt_edd42acf-bc82-40b8-bae3-c8ce3f8dcd54/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.032517 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-5pn44_b819f602-ddd5-4a16-b998-6d7d78798681/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.184996 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-5pn44_b819f602-ddd5-4a16-b998-6d7d78798681/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.200891 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-mmn9w_972a9e29-9c48-4d8e-9390-e91c9b422af8/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.236321 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-mmn9w_972a9e29-9c48-4d8e-9390-e91c9b422af8/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.364961 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2nh4k_7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.411510 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2nh4k_7f1b2ca9-ee98-43a5-8346-dfa6a59d03c9/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.574465 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-pw8g2_1f3fb0ee-0381-48df-91b7-1a72bf5acd62/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.670404 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-pw8g2_1f3fb0ee-0381-48df-91b7-1a72bf5acd62/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.692460 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7plfw_063c7b0a-5211-4356-9452-e55deeeeb834/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.928372 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-jfzgd_c032b9cc-5da5-4011-a397-e564fedcf04d/kube-rbac-proxy/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.931516 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-jfzgd_c032b9cc-5da5-4011-a397-e564fedcf04d/manager/0.log" Dec 05 09:39:29 crc kubenswrapper[4780]: I1205 09:39:29.957080 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-7plfw_063c7b0a-5211-4356-9452-e55deeeeb834/manager/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.119402 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f5mb6tl_7d2e66f9-622c-4723-abd6-d1d9689ac660/kube-rbac-proxy/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.121002 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f5mb6tl_7d2e66f9-622c-4723-abd6-d1d9689ac660/manager/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.467582 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55b6fb9447-qt9q2_f138461d-58f8-48a4-a936-f5295892bdcd/operator/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.690723 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xbdxn_891b32d7-a00e-4aee-b1a9-11a17e231cf1/kube-rbac-proxy/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.809834 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2jm4g_7ec28974-de86-4b92-8635-e0ec75f5d605/registry-server/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.984356 4780 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xbdxn_891b32d7-a00e-4aee-b1a9-11a17e231cf1/manager/0.log" Dec 05 09:39:30 crc kubenswrapper[4780]: I1205 09:39:30.988649 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-n7pzh_4f817d18-c711-48ff-891e-f5f59fe1ec5f/kube-rbac-proxy/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.103350 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-n7pzh_4f817d18-c711-48ff-891e-f5f59fe1ec5f/manager/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.222153 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xmn7v_908cf347-4346-4eb1-996f-b214491207e0/operator/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.342174 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-sl2j8_b2f9a2dc-4b04-4209-a427-1467873d3d19/kube-rbac-proxy/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.449243 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-sl2j8_b2f9a2dc-4b04-4209-a427-1467873d3d19/manager/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.554010 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-r5j4j_0504ce62-63ec-4224-883e-495a8de219a6/kube-rbac-proxy/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.696787 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-gsggt_1af7f32d-6c1b-4ed2-8511-4ca770bba111/kube-rbac-proxy/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.791225 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-gsggt_1af7f32d-6c1b-4ed2-8511-4ca770bba111/manager/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.837278 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-r5j4j_0504ce62-63ec-4224-883e-495a8de219a6/manager/0.log" Dec 05 09:39:31 crc kubenswrapper[4780]: I1205 09:39:31.998567 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-dr6pg_9bf061f5-1016-4813-aad6-b50350f6a1c5/kube-rbac-proxy/0.log" Dec 05 09:39:32 crc kubenswrapper[4780]: I1205 09:39:32.043420 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-dr6pg_9bf061f5-1016-4813-aad6-b50350f6a1c5/manager/0.log" Dec 05 09:39:32 crc kubenswrapper[4780]: I1205 09:39:32.784349 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54bdf956c4-zslsd_02c9c751-c299-4ff8-9c2d-200aae3ea2ba/manager/0.log" Dec 05 09:39:43 crc kubenswrapper[4780]: I1205 09:39:43.139040 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:39:43 crc kubenswrapper[4780]: E1205 09:39:43.140806 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:39:48 crc kubenswrapper[4780]: I1205 09:39:48.984619 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q7mwk_471abb3f-f9ef-454a-8f00-87c4846e59f2/control-plane-machine-set-operator/0.log" Dec 05 09:39:49 crc kubenswrapper[4780]: I1205 09:39:49.118595 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tjqz5_0299e11c-ff9b-4b45-826b-5289efbfbef8/kube-rbac-proxy/0.log" Dec 05 09:39:49 crc kubenswrapper[4780]: I1205 09:39:49.158040 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-tjqz5_0299e11c-ff9b-4b45-826b-5289efbfbef8/machine-api-operator/0.log" Dec 05 09:39:54 crc kubenswrapper[4780]: I1205 09:39:54.139759 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:39:54 crc kubenswrapper[4780]: E1205 09:39:54.140663 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:40:00 crc kubenswrapper[4780]: I1205 09:40:00.422171 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-pzxsq_cd0edece-4073-4ad7-8d5f-1a42fa0e9cf2/cert-manager-controller/0.log" Dec 05 09:40:00 crc kubenswrapper[4780]: I1205 09:40:00.535437 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-z7nlh_e91b7bb5-7ce5-41f3-bb4d-1723a2f39e65/cert-manager-cainjector/0.log" Dec 05 09:40:00 crc kubenswrapper[4780]: I1205 09:40:00.577003 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-72gg4_37c07285-acc8-44f4-8fc3-fe29fa38cea2/cert-manager-webhook/0.log" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.154484 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 09:40:07.156606 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156631 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 09:40:07.156653 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="extract-utilities" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156661 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="extract-utilities" Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 
09:40:07.156679 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="extract-content" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156687 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="extract-content" Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 09:40:07.156709 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="extract-content" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156716 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="extract-content" Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 09:40:07.156738 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="extract-utilities" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156745 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="extract-utilities" Dec 05 09:40:07 crc kubenswrapper[4780]: E1205 09:40:07.156776 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.156783 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.157056 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e55fbb6-5102-4dca-942e-f30972815db3" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.157078 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd21289-8c0e-4763-923a-2adeeb3ef17b" containerName="registry-server" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.161366 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.174435 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.220421 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.220536 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6gb\" (UniqueName: \"kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.220581 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.322905 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.323064 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6gb\" (UniqueName: \"kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.323123 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.323526 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.323679 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.343287 4780 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kw6gb\" (UniqueName: \"kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb\") pod \"community-operators-856st\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:07 crc kubenswrapper[4780]: I1205 09:40:07.489444 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:08 crc kubenswrapper[4780]: I1205 09:40:08.063246 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:08 crc kubenswrapper[4780]: I1205 09:40:08.076720 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerStarted","Data":"c98df4854a905235babfbd9a5ddfe894266c376381b394958d6884b131f3138f"} Dec 05 09:40:09 crc kubenswrapper[4780]: I1205 09:40:09.087074 4780 generic.go:334] "Generic (PLEG): container finished" podID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerID="28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7" exitCode=0 Dec 05 09:40:09 crc kubenswrapper[4780]: I1205 09:40:09.087154 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerDied","Data":"28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7"} Dec 05 09:40:09 crc kubenswrapper[4780]: I1205 09:40:09.089149 4780 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 09:40:09 crc kubenswrapper[4780]: I1205 09:40:09.138848 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:40:09 crc kubenswrapper[4780]: E1205 09:40:09.139214 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:40:10 crc kubenswrapper[4780]: I1205 09:40:10.098760 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerStarted","Data":"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf"} Dec 05 09:40:11 crc kubenswrapper[4780]: I1205 09:40:11.107851 4780 generic.go:334] "Generic (PLEG): container finished" podID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerID="875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf" exitCode=0 Dec 05 09:40:11 crc kubenswrapper[4780]: I1205 09:40:11.107923 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerDied","Data":"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf"} Dec 05 09:40:12 crc kubenswrapper[4780]: I1205 09:40:12.121265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" 
event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerStarted","Data":"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0"} Dec 05 09:40:12 crc kubenswrapper[4780]: I1205 09:40:12.147060 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-856st" podStartSLOduration=2.723576092 podStartE2EDuration="5.147042007s" podCreationTimestamp="2025-12-05 09:40:07 +0000 UTC" firstStartedPulling="2025-12-05 09:40:09.08897389 +0000 UTC m=+10443.158490212" lastFinishedPulling="2025-12-05 09:40:11.512439795 +0000 UTC m=+10445.581956127" observedRunningTime="2025-12-05 09:40:12.139850664 +0000 UTC m=+10446.209367006" watchObservedRunningTime="2025-12-05 09:40:12.147042007 +0000 UTC m=+10446.216558339" Dec 05 09:40:13 crc kubenswrapper[4780]: I1205 09:40:13.580662 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-tlmcj_d169bdac-5bc1-4dff-9e50-d78c7eff7c37/nmstate-console-plugin/0.log" Dec 05 09:40:13 crc kubenswrapper[4780]: I1205 09:40:13.796654 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qgm26_6fb2fd6c-922a-4b19-9e9c-9ac1bde2bd7f/nmstate-handler/0.log" Dec 05 09:40:13 crc kubenswrapper[4780]: I1205 09:40:13.871292 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-j42vk_cdd4dac7-cd36-48d1-af63-2b40434c6e1c/kube-rbac-proxy/0.log" Dec 05 09:40:13 crc kubenswrapper[4780]: I1205 09:40:13.932991 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-j42vk_cdd4dac7-cd36-48d1-af63-2b40434c6e1c/nmstate-metrics/0.log" Dec 05 09:40:14 crc kubenswrapper[4780]: I1205 09:40:14.058223 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-g2mb7_bf858c19-528a-43ce-bd7c-317f7ad93ac7/nmstate-operator/0.log" Dec 05 09:40:14 crc kubenswrapper[4780]: I1205 09:40:14.119761 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-dx2wx_7e32be6a-c339-464c-94bb-44a5b3cb3224/nmstate-webhook/0.log" Dec 05 09:40:17 crc kubenswrapper[4780]: I1205 09:40:17.490934 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:17 crc kubenswrapper[4780]: I1205 09:40:17.491519 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:17 crc kubenswrapper[4780]: I1205 09:40:17.539678 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:18 crc kubenswrapper[4780]: I1205 09:40:18.223992 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:18 crc kubenswrapper[4780]: I1205 09:40:18.268129 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.197960 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-856st" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="registry-server" containerID="cri-o://4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0" gracePeriod=2 Dec 05 09:40:20 crc 
Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.689317 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.812214 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content\") pod \"c19aec1e-d990-4682-b4c6-9464e39e1eab\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.812313 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6gb\" (UniqueName: \"kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb\") pod \"c19aec1e-d990-4682-b4c6-9464e39e1eab\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.812477 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities\") pod \"c19aec1e-d990-4682-b4c6-9464e39e1eab\" (UID: \"c19aec1e-d990-4682-b4c6-9464e39e1eab\") " Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.816689 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities" (OuterVolumeSpecName: "utilities") pod "c19aec1e-d990-4682-b4c6-9464e39e1eab" (UID: "c19aec1e-d990-4682-b4c6-9464e39e1eab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.822202 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb" (OuterVolumeSpecName: "kube-api-access-kw6gb") pod "c19aec1e-d990-4682-b4c6-9464e39e1eab" (UID: "c19aec1e-d990-4682-b4c6-9464e39e1eab"). InnerVolumeSpecName "kube-api-access-kw6gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.872667 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c19aec1e-d990-4682-b4c6-9464e39e1eab" (UID: "c19aec1e-d990-4682-b4c6-9464e39e1eab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.916628 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.916747 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6gb\" (UniqueName: \"kubernetes.io/projected/c19aec1e-d990-4682-b4c6-9464e39e1eab-kube-api-access-kw6gb\") on node \"crc\" DevicePath \"\"" Dec 05 09:40:20 crc kubenswrapper[4780]: I1205 09:40:20.916814 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19aec1e-d990-4682-b4c6-9464e39e1eab-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.239193 4780 generic.go:334] "Generic (PLEG): container finished" podID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerID="4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0" exitCode=0 Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.239238 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerDied","Data":"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0"} Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.239265 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-856st" event={"ID":"c19aec1e-d990-4682-b4c6-9464e39e1eab","Type":"ContainerDied","Data":"c98df4854a905235babfbd9a5ddfe894266c376381b394958d6884b131f3138f"} Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.239280 4780 scope.go:117] "RemoveContainer" containerID="4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.239401 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-856st" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.264739 4780 scope.go:117] "RemoveContainer" containerID="875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.295642 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.304149 4780 scope.go:117] "RemoveContainer" containerID="28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.305016 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-856st"] Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.343019 4780 scope.go:117] "RemoveContainer" containerID="4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0" Dec 05 09:40:21 crc kubenswrapper[4780]: E1205 09:40:21.343911 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0\": container with ID starting with 4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0 not found: ID does not exist" containerID="4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.343943 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0"} err="failed to get container status \"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0\": rpc error: code = NotFound desc = could not find container \"4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0\": container with ID starting with 4d6433b699dabcd0c466b700a5cab381214992c354a52a09d0d10b4b07eebda0 not found: ID does not exist" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.343999 4780 scope.go:117] "RemoveContainer" containerID="875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf" Dec 05 09:40:21 crc kubenswrapper[4780]: E1205 09:40:21.344217 4780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf\": container with ID starting with 875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf not found: ID does not exist" containerID="875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.344240 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf"} err="failed to get container status \"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf\": rpc error: code = NotFound desc = could not find container \"875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf\": container with ID starting with 875506cc5fd0a8fceb97188e26edba9e1f93d54028ad395694d449ea0a5ae5cf not found: ID does not exist" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.344257 4780 scope.go:117] "RemoveContainer" containerID="28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7" Dec 05 09:40:21 crc kubenswrapper[4780]: E1205 09:40:21.344649 4780 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7\": container with ID starting with 28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7 not found: ID does not exist" containerID="28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7" Dec 05 09:40:21 crc kubenswrapper[4780]: I1205 09:40:21.344671 4780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7"} err="failed to get container status \"28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7\": rpc error: code = NotFound desc = could not find container \"28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7\": container with ID starting with 28cca41e4309f8849c5b4dae4f9100cff75e592b142ccafcfe154f5d104c41c7 not found: ID does not exist" Dec 05 09:40:22 crc kubenswrapper[4780]: I1205 09:40:22.150375 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" path="/var/lib/kubelet/pods/c19aec1e-d990-4682-b4c6-9464e39e1eab/volumes" Dec 05 09:40:24 crc kubenswrapper[4780]: I1205 09:40:24.139534 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:40:24 crc kubenswrapper[4780]: E1205 09:40:24.140075 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:40:28 crc kubenswrapper[4780]: I1205 09:40:28.948358 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-j5f55_8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73/kube-rbac-proxy/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.208598 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-frr-files/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.227392 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-j5f55_8ad0c55e-fd7d-4eec-a4a9-f5eb9c39fa73/controller/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.432758 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-metrics/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.434521 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-frr-files/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.445142 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-reloader/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.505318 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-reloader/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.726312 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-metrics/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.738991 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-frr-files/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.752262 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-reloader/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.762086 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-metrics/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.974635 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-metrics/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.986741 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/controller/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.988247 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-reloader/0.log" Dec 05 09:40:29 crc kubenswrapper[4780]: I1205 09:40:29.990211 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/cp-frr-files/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.152035 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/frr-metrics/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.160398 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/kube-rbac-proxy/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.218415 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/kube-rbac-proxy-frr/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.405521 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/reloader/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.475996 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-4plmn_df87efac-4c66-45a5-86d4-9a36f7e21a53/frr-k8s-webhook-server/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.769108 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-544c44bb58-hzmkv_f65804ab-3d85-427c-9143-5092175e82f9/manager/0.log" Dec 05 09:40:30 crc kubenswrapper[4780]: I1205 09:40:30.952960 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d695447b7-xktbz_00f67bb2-0ac4-4e3c-b17c-733b12b5fde8/webhook-server/0.log" Dec 05 09:40:31 crc kubenswrapper[4780]: I1205 09:40:31.017198 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gffn7_aa62ab99-7a56-4e90-bbaa-0cd417c05ab2/kube-rbac-proxy/0.log" Dec 05 09:40:31 crc kubenswrapper[4780]: I1205 09:40:31.947736 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-gffn7_aa62ab99-7a56-4e90-bbaa-0cd417c05ab2/speaker/0.log" Dec 05 09:40:33 crc kubenswrapper[4780]: I1205 09:40:33.453272 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-27nsf_b354ba59-4664-4c61-abe6-e31896facfa5/frr/0.log" Dec 05 09:40:35 crc kubenswrapper[4780]: I1205 09:40:35.139147 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:40:35 crc kubenswrapper[4780]: E1205 09:40:35.139729 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.300798 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/util/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.552038 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/util/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.607750 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/pull/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.664776 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/pull/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.819198 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/util/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.865393 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/pull/0.log" Dec 05 09:40:44 crc kubenswrapper[4780]: I1205 09:40:44.892831 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87pkx_17bffe80-37bd-4fa7-8db9-fd583dbe069e/extract/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.019034 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.146675 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.196636 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/pull/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.196793 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/pull/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.346161 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.355525 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/pull/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.386214 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f4pcs7_bb460ef0-02da-47c6-81c5-4c6ccc81e705/extract/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.510724 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.686927 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.709186 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/pull/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.717553 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/pull/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.878314 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/util/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.930161 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/extract/0.log" Dec 05 09:40:45 crc kubenswrapper[4780]: I1205 09:40:45.931147 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210ghb67_136bf2be-c593-48b8-9531-e92937442594/pull/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.057844 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/util/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.247562 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/util/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.267221 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/pull/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.331370 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/pull/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.474967 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/util/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.492002 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/pull/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.522461 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wjgpk_978f8e4f-0b0b-4604-a233-6c85dd81376b/extract/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.629475 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-utilities/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.827065 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-content/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.830665 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-content/0.log" Dec 05 09:40:46 crc kubenswrapper[4780]: I1205 09:40:46.834301 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-utilities/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.001799 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-utilities/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.038888 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/extract-content/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.284471 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-utilities/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.437337 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-utilities/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.527560 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-content/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.554067 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-content/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.587519 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gk48d_3122616e-f363-4387-9683-be6ea9c09964/registry-server/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.746809 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-utilities/0.log" Dec 05 09:40:47 crc kubenswrapper[4780]: I1205 09:40:47.785075 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/extract-content/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.031082 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lptmc_a3732e56-d979-4da4-88e7-bf3e0aa77daf/marketplace-operator/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.139064 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:40:48 crc kubenswrapper[4780]: E1205 09:40:48.139572 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.140977 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6jqst_b276c8de-e39f-4b60-a6bc-d08e9085e7c4/registry-server/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.181149 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-utilities/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.305020 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-content/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.321628 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-utilities/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.366422 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-content/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.471964 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-utilities/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.530845 4780 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/extract-content/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.567423 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-utilities/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.772985 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-utilities/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.827504 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hztnh_8973937a-3238-47fa-b653-5f2e1cf63d9c/registry-server/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.848378 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-content/0.log" Dec 05 09:40:48 crc kubenswrapper[4780]: I1205 09:40:48.858770 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-content/0.log" Dec 05 09:40:49 crc kubenswrapper[4780]: I1205 09:40:49.079150 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-utilities/0.log" Dec 05 09:40:49 crc kubenswrapper[4780]: I1205 09:40:49.098602 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/extract-content/0.log" Dec 05 09:40:50 crc kubenswrapper[4780]: I1205 09:40:50.252101 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9j9w_4dce830d-a940-4f68-95fa-922479207512/registry-server/0.log" Dec 05 09:41:00 crc kubenswrapper[4780]: I1205 09:41:00.139336 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:41:00 crc kubenswrapper[4780]: E1205 09:41:00.140534 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:41:00 crc kubenswrapper[4780]: I1205 09:41:00.762493 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-nqsf9_aed1375c-8cad-45e8-b1e1-9ffce12b6191/prometheus-operator/0.log" Dec 05 09:41:00 crc kubenswrapper[4780]: I1205 09:41:00.906790 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f79dddddf-6zvbc_b0730c81-1449-4ce1-a29e-a5a57e06b444/prometheus-operator-admission-webhook/0.log" Dec 05 09:41:00 crc kubenswrapper[4780]: I1205 09:41:00.961115 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f79dddddf-z6cq8_2af3a1a3-37b1-4fec-a413-f898353aa3f8/prometheus-operator-admission-webhook/0.log" Dec 05 09:41:01 crc kubenswrapper[4780]: I1205 09:41:01.124655 
4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-vvh49_66757364-1218-441c-8f46-57bbd91142f8/operator/0.log" Dec 05 09:41:01 crc kubenswrapper[4780]: I1205 09:41:01.185216 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-hvxtn_c72d36a9-d0a9-4cea-9cbe-930e2435e813/perses-operator/0.log" Dec 05 09:41:11 crc kubenswrapper[4780]: I1205 09:41:11.138731 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:41:11 crc kubenswrapper[4780]: E1205 09:41:11.139469 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:41:23 crc kubenswrapper[4780]: I1205 09:41:23.138601 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:41:23 crc kubenswrapper[4780]: E1205 09:41:23.139408 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:41:37 crc kubenswrapper[4780]: I1205 09:41:37.139386 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:41:37 crc kubenswrapper[4780]: E1205 09:41:37.140274 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:41:49 crc kubenswrapper[4780]: I1205 09:41:49.139363 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:41:49 crc kubenswrapper[4780]: E1205 09:41:49.140200 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:42:01 crc kubenswrapper[4780]: I1205 09:42:01.138543 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:42:01 crc kubenswrapper[4780]: E1205 09:42:01.139395 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:42:16 crc kubenswrapper[4780]: I1205 09:42:16.147385 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:42:16 crc kubenswrapper[4780]: E1205 09:42:16.148255 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:42:29 crc kubenswrapper[4780]: I1205 09:42:29.139438 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:42:29 crc kubenswrapper[4780]: E1205 09:42:29.140141 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:42:37 crc kubenswrapper[4780]: I1205 09:42:37.025467 4780 scope.go:117] "RemoveContainer" containerID="faadde9b4f9fbd1bd6710474132412ab67b426e74c295ef17aac085af34e60ce" Dec 05 09:42:41 crc kubenswrapper[4780]: I1205 09:42:41.138851 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:42:41 crc kubenswrapper[4780]: E1205 09:42:41.139665 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:42:53 crc kubenswrapper[4780]: I1205 09:42:53.140045 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:42:53 crc kubenswrapper[4780]: E1205 09:42:53.141383 4780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjftd_openshift-machine-config-operator(a640087b-e493-4ac1-bef1-a9c05dd7c0ad)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" Dec 05 09:43:05 crc kubenswrapper[4780]: I1205 09:43:05.138938 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242"
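The machine-config-daemon entries above repeat the same pair, a RemoveContainer followed by a CrashLoopBackOff rejection, every ten-odd seconds from 09:40 through 09:42: each sync-loop pass asks to restart the container and is refused while the restart backoff is in force. The kubelet's backoff doubles per failed start and is capped, which is how the message settles at "back-off 5m0s"; at 09:43:05 the wait has elapsed and the ContainerStarted event below finally follows. A sketch of that doubling-with-cap policy; the 10s base and 5m cap match the defaults implied by the message, and the function name is ours:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style restart delay after n failed
// starts: doubling from 10s, capped at 5m ("back-off 5m0s" above).
func crashLoopDelay(n int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failed starts: %d  next wait: %v\n", n, crashLoopDelay(n))
	}
	// From the sixth failure on, the wait pins at 5m0s, matching the
	// repeated messages for machine-config-daemon-mjftd.
}
```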
event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"660054d20cec5baf5f52847fb833dbacdff136ffa17aea9a7fcf3d0ff127080f"} Dec 05 09:43:17 crc kubenswrapper[4780]: I1205 09:43:17.930601 4780 generic.go:334] "Generic (PLEG): container finished" podID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerID="7657ed80e699366adc2316ca16297e32d48b536bbc2b63c716cb85a5973fc8e2" exitCode=0 Dec 05 09:43:17 crc kubenswrapper[4780]: I1205 09:43:17.930648 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-spv9c/must-gather-t8z9v" event={"ID":"39e33c36-514a-48aa-ac3d-8f5988371fc7","Type":"ContainerDied","Data":"7657ed80e699366adc2316ca16297e32d48b536bbc2b63c716cb85a5973fc8e2"} Dec 05 09:43:17 crc kubenswrapper[4780]: I1205 09:43:17.931847 4780 scope.go:117] "RemoveContainer" containerID="7657ed80e699366adc2316ca16297e32d48b536bbc2b63c716cb85a5973fc8e2" Dec 05 09:43:18 crc kubenswrapper[4780]: I1205 09:43:18.606478 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-spv9c_must-gather-t8z9v_39e33c36-514a-48aa-ac3d-8f5988371fc7/gather/0.log" Dec 05 09:43:30 crc kubenswrapper[4780]: I1205 09:43:30.754740 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-spv9c/must-gather-t8z9v"] Dec 05 09:43:30 crc kubenswrapper[4780]: I1205 09:43:30.757338 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-spv9c/must-gather-t8z9v" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="copy" containerID="cri-o://a8b2e641797330c31494d40f5970b4d01834b7403e4060cd5137c10d3e6fc429" gracePeriod=2 Dec 05 09:43:30 crc kubenswrapper[4780]: I1205 09:43:30.764642 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-spv9c/must-gather-t8z9v"] Dec 05 09:43:31 crc kubenswrapper[4780]: I1205 09:43:31.087970 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-spv9c_must-gather-t8z9v_39e33c36-514a-48aa-ac3d-8f5988371fc7/copy/0.log" Dec 05 09:43:31 crc kubenswrapper[4780]: I1205 09:43:31.088853 4780 generic.go:334] "Generic (PLEG): container finished" podID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerID="a8b2e641797330c31494d40f5970b4d01834b7403e4060cd5137c10d3e6fc429" exitCode=143 Dec 05 09:43:31 crc kubenswrapper[4780]: I1205 09:43:31.874094 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-spv9c_must-gather-t8z9v_39e33c36-514a-48aa-ac3d-8f5988371fc7/copy/0.log" Dec 05 09:43:31 crc kubenswrapper[4780]: I1205 09:43:31.875657 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.003856 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output\") pod \"39e33c36-514a-48aa-ac3d-8f5988371fc7\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.004113 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmzc4\" (UniqueName: \"kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4\") pod \"39e33c36-514a-48aa-ac3d-8f5988371fc7\" (UID: \"39e33c36-514a-48aa-ac3d-8f5988371fc7\") " Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.011143 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4" (OuterVolumeSpecName: "kube-api-access-wmzc4") pod "39e33c36-514a-48aa-ac3d-8f5988371fc7" (UID: "39e33c36-514a-48aa-ac3d-8f5988371fc7"). InnerVolumeSpecName "kube-api-access-wmzc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.099663 4780 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-spv9c_must-gather-t8z9v_39e33c36-514a-48aa-ac3d-8f5988371fc7/copy/0.log" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.100254 4780 scope.go:117] "RemoveContainer" containerID="a8b2e641797330c31494d40f5970b4d01834b7403e4060cd5137c10d3e6fc429" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.100599 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-spv9c/must-gather-t8z9v" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.107173 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmzc4\" (UniqueName: \"kubernetes.io/projected/39e33c36-514a-48aa-ac3d-8f5988371fc7-kube-api-access-wmzc4\") on node \"crc\" DevicePath \"\"" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.138486 4780 scope.go:117] "RemoveContainer" containerID="7657ed80e699366adc2316ca16297e32d48b536bbc2b63c716cb85a5973fc8e2" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.214467 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "39e33c36-514a-48aa-ac3d-8f5988371fc7" (UID: "39e33c36-514a-48aa-ac3d-8f5988371fc7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:43:32 crc kubenswrapper[4780]: I1205 09:43:32.312681 4780 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39e33c36-514a-48aa-ac3d-8f5988371fc7-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 05 09:43:34 crc kubenswrapper[4780]: I1205 09:43:34.149674 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" path="/var/lib/kubelet/pods/39e33c36-514a-48aa-ac3d-8f5988371fc7/volumes" Dec 05 09:43:37 crc kubenswrapper[4780]: I1205 09:43:37.083430 4780 scope.go:117] "RemoveContainer" containerID="4823d433fdc9404220c9c187e666bffe89a1617111eb612b093755b9f9c92236" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.391011 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:21 crc kubenswrapper[4780]: E1205 09:44:21.393373 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="registry-server" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393412 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="registry-server" Dec 05 09:44:21 crc kubenswrapper[4780]: E1205 09:44:21.393434 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="gather" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393443 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="gather" Dec 05 09:44:21 crc kubenswrapper[4780]: E1205 09:44:21.393460 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="copy" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393470 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="copy" Dec 05 09:44:21 crc kubenswrapper[4780]: E1205 09:44:21.393497 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="extract-utilities" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393505 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="extract-utilities" Dec 05 09:44:21 crc kubenswrapper[4780]: E1205 09:44:21.393514 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="extract-content" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393519 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="extract-content" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393718 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="copy" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393738 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e33c36-514a-48aa-ac3d-8f5988371fc7" containerName="gather" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.393749 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19aec1e-d990-4682-b4c6-9464e39e1eab" containerName="registry-server" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.395424 4780 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.403515 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.494215 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.494589 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.494798 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77f5w\" (UniqueName: \"kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.597025 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.597203 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.597288 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77f5w\" (UniqueName: \"kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.597486 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.597518 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.618126 4780 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-77f5w\" (UniqueName: \"kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w\") pod \"redhat-operators-chvbc\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:21 crc kubenswrapper[4780]: I1205 09:44:21.723607 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:22 crc kubenswrapper[4780]: I1205 09:44:22.180659 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:22 crc kubenswrapper[4780]: I1205 09:44:22.548093 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerID="2030ea7c71d153d09806ab92e1e835913b8ba05f848093155513fd03c200428a" exitCode=0 Dec 05 09:44:22 crc kubenswrapper[4780]: I1205 09:44:22.548251 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerDied","Data":"2030ea7c71d153d09806ab92e1e835913b8ba05f848093155513fd03c200428a"} Dec 05 09:44:22 crc kubenswrapper[4780]: I1205 09:44:22.549534 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerStarted","Data":"7f9ed429add8b54b20a6d981f2869151f26913cc718be2e4e3bacefca0e47ba0"} Dec 05 09:44:23 crc kubenswrapper[4780]: I1205 09:44:23.560632 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerStarted","Data":"074a6ba403d0fe55ba4bdeb5985ada276257a9c74838b1605a4f442a90fced4a"} Dec 05 09:44:25 crc kubenswrapper[4780]: I1205 09:44:25.579908 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerID="074a6ba403d0fe55ba4bdeb5985ada276257a9c74838b1605a4f442a90fced4a" exitCode=0 Dec 05 09:44:25 crc kubenswrapper[4780]: I1205 09:44:25.579936 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerDied","Data":"074a6ba403d0fe55ba4bdeb5985ada276257a9c74838b1605a4f442a90fced4a"} Dec 05 09:44:26 crc kubenswrapper[4780]: I1205 09:44:26.592043 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerStarted","Data":"8eafad6cd6a99600ad8d4d35e0efded159e819a6eb2003a0665eb6c462701699"} Dec 05 09:44:26 crc kubenswrapper[4780]: I1205 09:44:26.610935 4780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chvbc" podStartSLOduration=2.014189069 podStartE2EDuration="5.610919639s" podCreationTimestamp="2025-12-05 09:44:21 +0000 UTC" firstStartedPulling="2025-12-05 09:44:22.549871334 +0000 UTC m=+10696.619387666" lastFinishedPulling="2025-12-05 09:44:26.146601904 +0000 UTC m=+10700.216118236" observedRunningTime="2025-12-05 09:44:26.608267308 +0000 UTC m=+10700.677783640" watchObservedRunningTime="2025-12-05 09:44:26.610919639 +0000 UTC m=+10700.680435971" Dec 05 09:44:31 crc kubenswrapper[4780]: I1205 09:44:31.724403 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:31 crc kubenswrapper[4780]: I1205 09:44:31.725067 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:32 crc kubenswrapper[4780]: I1205 09:44:32.514553 4780 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:32 crc kubenswrapper[4780]: I1205 09:44:32.698672 4780 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:32 crc kubenswrapper[4780]: I1205 09:44:32.756247 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:34 crc kubenswrapper[4780]: I1205 09:44:34.667300 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chvbc" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="registry-server" containerID="cri-o://8eafad6cd6a99600ad8d4d35e0efded159e819a6eb2003a0665eb6c462701699" gracePeriod=2 Dec 05 09:44:35 crc kubenswrapper[4780]: I1205 09:44:35.679202 4780 generic.go:334] "Generic (PLEG): container finished" podID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerID="8eafad6cd6a99600ad8d4d35e0efded159e819a6eb2003a0665eb6c462701699" exitCode=0 Dec 05 09:44:35 crc kubenswrapper[4780]: I1205 09:44:35.679284 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerDied","Data":"8eafad6cd6a99600ad8d4d35e0efded159e819a6eb2003a0665eb6c462701699"} Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.215337 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.398066 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities\") pod \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.398473 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77f5w\" (UniqueName: \"kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w\") pod \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.398806 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content\") pod \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\" (UID: \"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c\") " Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.399274 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities" (OuterVolumeSpecName: "utilities") pod "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" (UID: "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.405753 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w" (OuterVolumeSpecName: "kube-api-access-77f5w") pod "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" (UID: "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c"). InnerVolumeSpecName "kube-api-access-77f5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.501607 4780 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.501646 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77f5w\" (UniqueName: \"kubernetes.io/projected/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-kube-api-access-77f5w\") on node \"crc\" DevicePath \"\"" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.507167 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" (UID: "2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.603343 4780 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.690554 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chvbc" event={"ID":"2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c","Type":"ContainerDied","Data":"7f9ed429add8b54b20a6d981f2869151f26913cc718be2e4e3bacefca0e47ba0"} Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.690599 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chvbc" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.690621 4780 scope.go:117] "RemoveContainer" containerID="8eafad6cd6a99600ad8d4d35e0efded159e819a6eb2003a0665eb6c462701699" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.711940 4780 scope.go:117] "RemoveContainer" containerID="074a6ba403d0fe55ba4bdeb5985ada276257a9c74838b1605a4f442a90fced4a" Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.728657 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.737434 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chvbc"] Dec 05 09:44:36 crc kubenswrapper[4780]: I1205 09:44:36.750751 4780 scope.go:117] "RemoveContainer" containerID="2030ea7c71d153d09806ab92e1e835913b8ba05f848093155513fd03c200428a" Dec 05 09:44:38 crc kubenswrapper[4780]: I1205 09:44:38.156568 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" path="/var/lib/kubelet/pods/2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c/volumes" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.170034 4780 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh"] Dec 05 09:45:00 crc kubenswrapper[4780]: E1205 09:45:00.171077 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="extract-content" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.171097 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="extract-content" Dec 05 09:45:00 crc kubenswrapper[4780]: E1205 09:45:00.171133 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="extract-utilities" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.171177 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="extract-utilities" Dec 05 09:45:00 crc kubenswrapper[4780]: E1205 09:45:00.171221 4780 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="registry-server" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.171234 4780 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="registry-server" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.171481 4780 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9538ab-02f9-4d4f-a4c3-6bdb4d904f1c" containerName="registry-server" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.172355 4780 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.174634 4780 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.174909 4780 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.184416 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh"] Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.270009 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.270375 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n4p\" (UniqueName: \"kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.270407 4780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.373372 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.373442 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5n4p\" (UniqueName: \"kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.373516 4780 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.374487 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume\") pod 
\"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.379746 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.388553 4780 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5n4p\" (UniqueName: \"kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p\") pod \"collect-profiles-29415465-jd9nh\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.497174 4780 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:00 crc kubenswrapper[4780]: I1205 09:45:00.941395 4780 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh"] Dec 05 09:45:01 crc kubenswrapper[4780]: I1205 09:45:01.914054 4780 generic.go:334] "Generic (PLEG): container finished" podID="19e9007c-047e-410f-a566-3859bbc92699" containerID="9776bba2dd1c54abc51a6c0fb5e8847519f30b076d2df2e1ac9192a3e61ae50a" exitCode=0 Dec 05 09:45:01 crc kubenswrapper[4780]: I1205 09:45:01.914106 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" event={"ID":"19e9007c-047e-410f-a566-3859bbc92699","Type":"ContainerDied","Data":"9776bba2dd1c54abc51a6c0fb5e8847519f30b076d2df2e1ac9192a3e61ae50a"} Dec 05 09:45:01 crc kubenswrapper[4780]: I1205 09:45:01.914351 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" event={"ID":"19e9007c-047e-410f-a566-3859bbc92699","Type":"ContainerStarted","Data":"dd6d8be4a1979de752efb8903e376800ae721fa348e94ed5510a41d7f18a8e4c"} Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.259627 4780 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.434548 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume\") pod \"19e9007c-047e-410f-a566-3859bbc92699\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.435032 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5n4p\" (UniqueName: \"kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p\") pod \"19e9007c-047e-410f-a566-3859bbc92699\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.435222 4780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume\") pod \"19e9007c-047e-410f-a566-3859bbc92699\" (UID: \"19e9007c-047e-410f-a566-3859bbc92699\") " Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.435809 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume" (OuterVolumeSpecName: "config-volume") pod "19e9007c-047e-410f-a566-3859bbc92699" (UID: "19e9007c-047e-410f-a566-3859bbc92699"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.440786 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19e9007c-047e-410f-a566-3859bbc92699" (UID: "19e9007c-047e-410f-a566-3859bbc92699"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.440876 4780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p" (OuterVolumeSpecName: "kube-api-access-c5n4p") pod "19e9007c-047e-410f-a566-3859bbc92699" (UID: "19e9007c-047e-410f-a566-3859bbc92699"). InnerVolumeSpecName "kube-api-access-c5n4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.537331 4780 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e9007c-047e-410f-a566-3859bbc92699-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.537364 4780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5n4p\" (UniqueName: \"kubernetes.io/projected/19e9007c-047e-410f-a566-3859bbc92699-kube-api-access-c5n4p\") on node \"crc\" DevicePath \"\"" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.537373 4780 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e9007c-047e-410f-a566-3859bbc92699-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.934033 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" event={"ID":"19e9007c-047e-410f-a566-3859bbc92699","Type":"ContainerDied","Data":"dd6d8be4a1979de752efb8903e376800ae721fa348e94ed5510a41d7f18a8e4c"} Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.934071 4780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6d8be4a1979de752efb8903e376800ae721fa348e94ed5510a41d7f18a8e4c" Dec 05 09:45:03 crc kubenswrapper[4780]: I1205 09:45:03.934120 4780 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415465-jd9nh" Dec 05 09:45:04 crc kubenswrapper[4780]: I1205 09:45:04.326957 4780 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc"] Dec 05 09:45:04 crc kubenswrapper[4780]: I1205 09:45:04.339845 4780 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415420-z9smc"] Dec 05 09:45:06 crc kubenswrapper[4780]: I1205 09:45:06.153786 4780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beabb19d-470d-42a1-9db2-ca90b0880b88" path="/var/lib/kubelet/pods/beabb19d-470d-42a1-9db2-ca90b0880b88/volumes" Dec 05 09:45:29 crc kubenswrapper[4780]: I1205 09:45:29.907818 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:45:29 crc kubenswrapper[4780]: I1205 09:45:29.908736 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:45:37 crc kubenswrapper[4780]: I1205 09:45:37.208733 4780 scope.go:117] "RemoveContainer" containerID="486335de1cb1adfeda303488bf2bd868ebd3ff9a1235b6332261ffd7a0b7b78f" Dec 05 09:45:59 crc kubenswrapper[4780]: I1205 09:45:59.908417 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 05 09:45:59 crc kubenswrapper[4780]: I1205 09:45:59.910151 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:46:29 crc kubenswrapper[4780]: I1205 09:46:29.907527 4780 patch_prober.go:28] interesting pod/machine-config-daemon-mjftd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 09:46:29 crc kubenswrapper[4780]: I1205 09:46:29.908103 4780 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 09:46:29 crc kubenswrapper[4780]: I1205 09:46:29.908142 4780 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" Dec 05 09:46:29 crc kubenswrapper[4780]: I1205 09:46:29.908917 4780 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"660054d20cec5baf5f52847fb833dbacdff136ffa17aea9a7fcf3d0ff127080f"} pod="openshift-machine-config-operator/machine-config-daemon-mjftd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 09:46:29 crc kubenswrapper[4780]: I1205 09:46:29.908969 4780 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" podUID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerName="machine-config-daemon" containerID="cri-o://660054d20cec5baf5f52847fb833dbacdff136ffa17aea9a7fcf3d0ff127080f" gracePeriod=600 Dec 05 09:46:30 crc kubenswrapper[4780]: I1205 09:46:30.839308 4780 generic.go:334] "Generic (PLEG): container finished" podID="a640087b-e493-4ac1-bef1-a9c05dd7c0ad" containerID="660054d20cec5baf5f52847fb833dbacdff136ffa17aea9a7fcf3d0ff127080f" exitCode=0 Dec 05 09:46:30 crc kubenswrapper[4780]: I1205 09:46:30.839351 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerDied","Data":"660054d20cec5baf5f52847fb833dbacdff136ffa17aea9a7fcf3d0ff127080f"} Dec 05 09:46:30 crc kubenswrapper[4780]: I1205 09:46:30.840124 4780 scope.go:117] "RemoveContainer" containerID="55ab937008cd876bfa77093d83cd819245f58308e58ac3a40443bcc7c42ae242" Dec 05 09:46:30 crc kubenswrapper[4780]: I1205 09:46:30.841134 4780 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjftd" event={"ID":"a640087b-e493-4ac1-bef1-a9c05dd7c0ad","Type":"ContainerStarted","Data":"4da8ed06b111f986290ab091fdb7cd5bdfbc1c10d8cf7ba8c217c47c6d372c35"}